Optimization of Basepoint Configuration in Localization of Signal Interference Device
Precise navigation, as a method for guiding vehicles from one point to another, is an important subject these days, especially in the navigation of aircraft. Global navigation satellite systems (GNSSs) are capable tools for this purpose. Any intentional or unintentional interference in satellite signals may cause risks of deadly accidents. Therefore, it is tremendously important to monitor airports or harbors and locate any radio frequency interference device operating there. This localization can be done by measuring the time of arrival (TOA), angle of arrival (AOA), or time difference of arrival (TDOA) of signals from the device to sensors or receivers at some basepoints. In this article, a method based on these arrivals is proposed for optimizing the configuration formed by these basepoints, using a large grid of points covering a control area. Furthermore, a simulation test was performed to verify the theory, and after that a control network was designed and optimized for the international Landvetter Airport of Sweden. Our simulation studies show that when the AOA is used, our optimization is more robust with respect to the control grid resolution. In addition, optimization based on the TDOA improves the coverage over the control area with a significant reduction of the control point errors, but because of the special geometric shape of the Landvetter Airport, such an optimization was not successful there. DOI: 10.1061/(ASCE)SU.1943-5428.0000416. This work is made available under the terms of the Creative Commons Attribution 4.0 International license, https://creativecommons.org/licenses/by/4.0/.
Introduction
Today, almost all vehicles at the surface of the Earth are navigated by global navigation satellite systems (GNSSs). Electromagnetic signals, carrying satellite coordinates, are constantly transmitted from satellites toward all GNSS receivers mounted on vehicles. However, the navigation of aircraft needs more attention because any radio frequency interference in their navigation signal may lead to catastrophes for crew, passengers, and even people on the ground. Takeoff and landing are the two riskiest phases of a flight, and both take place close to an airport. Weather conditions, connection with the airport traffic control tower, and the navigation signal are important factors for a successful landing or takeoff. Any intentional or unintentional interference in aircraft navigation and in the connection with a control tower might lead to serious problems and risk human life. Therefore, some sensors or receivers with the ability to provide information about interference are needed. The geometric configuration of the basepoints, at which these sensors and receivers are placed, plays an important role in the successful localization of an interference device. In this article, a method is developed and applied for optimizing such a configuration around an area in such a way that any interference device can be localized with higher precision.
Jamming and spoofing are two well-known types of signal interference. Jamming means transmitting a radio frequency signal into the same band as, or a band near, the satellite navigation band of interest to prevent the signal from reaching the sensors or receivers, and spoofing is the transmission of a fake GNSS signal (Dempster 2016). Studies have shown that a simple and relatively cheap GNSS spoofer can be used to take over, for example, a ship's navigation without being detected; see Humphreys et al. (2008) and Divis (2013). Because the power level of GNSS signals is low, such signals are susceptible to interference; therefore, a relatively weak interference signal can jam a receiver (Dempster 2016). There are real examples of such interference affecting operational infrastructure; see, for example, Balaei et al. (2007), Clynch et al. (2003), Grant et al. (2009), Hambling (2011), Motella et al. (2008), and Pullen et al. (2012). Specifically, we can point to unintentional cases such as a faulty TV amplifier, which jammed global positioning system (GPS) operation at a harbor in Monterey, California, for 37 days (Clynch et al. 2003). A small jammer used in a delivery van driving on a nearby highway in 2009 disrupted the ground-based augmentation system (GBAS) aiding aircraft approaches at Newark Airport (Hambling 2011; Pullen et al. 2012; Warburton and Tedeschi 2011). The Central Radio Management Office of South Korea reported several disruptions from 2010 to 2012 due to GPS jammers (Seo and Kim 2013). In Australia, Balaei et al. (2007) detected interference in the GNSS band, and in Italy Motella et al. (2008) found interference from TV signals disrupting GPS. Recognition of an interference signal among all scattered signals is a complicated process and requires skills in signal processing, which is outside the scope of this paper.
The location of an interference device can be determined from basepoints with sensors or receivers that are able to detect and analyze an interference signal. The time of arrival (TOA), angle of arrival (AOA), and time difference of arrival (TDOA) are known measurements used for localization. Drake and Dogancay (2004) formulated the localization problem in prolate spheroidal coordinates and stated that these coordinates greatly simplify the mathematical equations of the TDOA; the equations become linear and correspond to the hyperbolic asymptotes of the TDOA. Ananthasubramaniam and Madhow (2008) investigated AOA measurements and developed a sequential algorithm; they concluded that the localization error is proportional to the AOA error variance and coverage area, and is reducible by increasing the number of estimates. Thompson et al. (2009) studied the configuration of the sensors' positions for localization of interfering devices, presented a method using differences of received signal strength measurements, and concluded that these can be alternatives to the TOA, AOA, and TDOA. Thompson (2013) investigated interference device detection and localization by analyzing the dilution of precision (DOP) from received signal strength, and concluded that the TDOA is superior to received signal strength measurements. Numerous studies exist regarding localization of GNSS interference using the mentioned quantities; however, optimization of the configuration of the basepoints is a novel idea, which is presented in this paper.
In geodetic network optimization, one purpose is to determine the optimal configuration of networks for maximizing precision and reliability, because the configuration has a significant influence on the quality of the network; for more details, see Koch (1982, 1985), Xu (1989), Kuang (1996), Eshagh and Kiamehr (2007), and Eshagh and Alizadeh-Khameneh (2014). Stochastic and evolutionary methods such as the genetic algorithm (GA) (Batilović et al. 2021), particle swarm optimization (PSO) (Yetkin et al. 2009; Singh et al. 2016), generalized particle swarm optimization (GPSO) (Batilović et al. 2022), and simulated annealing (SA) (Berné and Baselga 2004) are applicable for geodetic network optimization as well.
The configuration, or the geometry formed by the basepoints for localization, is also a type of geodetic network. However, in geodetic network optimization, the configuration is optimized by varying the unknown control points in such a way that the desired precision for these points is achieved, while in optimizing a wireless network for localization of an interference device, the configuration of the known basepoints is optimized by varying the basepoints to reach the desired precision for the control points. So far, no optimization has been used to determine the optimal configuration of wireless security networks around airports, harbors, or important infrastructure.
In this article, a new approach is proposed to determine the optimal configuration of the basepoints at which the sensors or receivers are placed. The TOA, AOA, and TDOA of signals to these sensors and receivers are considered as observables, and a criterion matrix is selected for the required precision of the points over the control area. The basepoints move until the estimated variance-covariance (VC) matrices of the control points are fitted to the criterion matrix subject to some constraints. A grid of points is designed over the control area, and each of its cells is a probable location of an interfering device. The locations of the basepoints are determined in such a way that the location errors of the grid points are minimized based on the type of observables and fit the predefined criterion matrix for the grid points.
Control Network and Observables
The precision of a network depends on both the quality of the measurements and the network configuration. In this section, a two-dimensional control network for localization is defined, and then the observables TOA, AOA, and TDOA as well as their mathematical models are presented.
Control Network
The control network is a grid of points over an area in addition to some basepoints from which an interfering device is localized by some measurements. The grid points, hereafter called control points, can be regarded as the probable positions of this device. Fig. 1 illustrates a schematic control network; the basepoints and control points are shown by triangles and small circles, respectively. Configuration means the geometric form created by the basepoints; e.g., in Fig. 1, the three basepoints form a triangle as the configuration. Considering more basepoints leads to a more complicated configuration and a harder optimization process, but it is possible. The resolution of the grid, or the distance between the control points, is defined as the resolution of design. The spacing between the points along the x- and y-axes need not be the same; it depends on the designer's judgment.
The main goal of considering a grid of points covering an area is that the positions of the control points can be determined locally from the grid; e.g., the point at the lower-left corner of the grid can be regarded as the origin of a local coordinate system, with the first column as the y-axis and the lowest row as the x-axis. Because the grid resolution is known, the coordinates of all control points can be simply determined. The important issue in optimization is the precision of these points when their coordinates are determined from the basepoints using some observables.
Observables
Our goal is to determine the horizontal position of an interfering device from some basepoints based on the two-dimensional observables TOA, AOA, and TDOA. If the velocity of the interfering signal is known, its distances from the basepoints can be determined by measuring the TOA. Generally, localization with the TOA requires precise time synchronization of the transmitter and receivers so that distances can be computed from the measured TOAs. However, in a two-dimensional (2D) localization using at least three sensors or receivers, the transmission time can be considered as an extra unknown in the system of equations and estimated simultaneously with the coordinates of the transmitter. Today's GNSS environmental monitoring systems (GEMSs) are probably able to measure this signal arrival. An advantage of using the TDOA is that the transmitter needs neither to be synchronized with the receivers and sensors (see Gustafsson 2018, p. 78) nor to send the transmission time. The TDOAs between the basepoints are estimated by cross-correlation processes among the received signals (e.g., Lindstrom et al. 2007). In addition, there are new GEMSs consisting of several low-cost sensors to monitor GNSS system performance in a specific area (Trinkle et al. 2012). The AOA can also be determined at each basepoint using antenna arrays; see Trinkle et al. (2012) for the mathematical modeling and estimation of the AOA from these arrays (Trinkle et al. 2012; Huang et al. 2022).
The mathematical formula of the distance between the interfering device at point i and the jth basepoint is

$$d_{ij} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2} \qquad (1a)$$

where $x_j$ and $y_j$ = x- and y-coordinates of the basepoint; and $x_i$ and $y_i$ = coordinates of the interfering device.

Fig. 1. Control network, control points, basepoints, and configuration.

The AOA is, in fact, the direction from which the interfering signal enters a sensor. This angle has the following mathematical expression in terms of the interfering device and basepoint coordinates (e.g., Trinkle et al. 2012; Gustafsson 2018):

$$\theta_{ij} = \arctan\left(\frac{y_j - y_i}{x_j - x_i}\right) \qquad (1b)$$

The range difference ($d_{ijk}$) from an interfering device at point i to two basepoints at j and k has the following mathematical formula (Trinkle et al. 2012; Gustafsson 2018):

$$d_{ijk} = d_{ij} - d_{ik} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2} - \sqrt{(x_k - x_i)^2 + (y_k - y_i)^2} \qquad (1c)$$

where $x_i$ and $y_i$ = coordinates of the interfering device; and $x_j$, $y_j$ and $x_k$, $y_k$ = pairs of coordinates of the jth and kth basepoints.
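To make these observation models concrete, the following minimal Python sketch (our illustration; the coordinates and function names are assumptions, not from the paper) evaluates the three observables for a hypothetical device position:

```python
import numpy as np

def toa_distance(device, base):
    """Distance between device i and basepoint j [Eq. (1a)]."""
    return np.hypot(base[0] - device[0], base[1] - device[1])

def aoa_angle(device, base):
    """Angle of arrival at the basepoint [Eq. (1b)], measured from the x-axis."""
    return np.arctan2(base[1] - device[1], base[0] - device[0])

def tdoa_range_diff(device, base_j, base_k):
    """Range difference between two basepoints [Eq. (1c)]."""
    return toa_distance(device, base_j) - toa_distance(device, base_k)

device = np.array([1000.0, 1500.0])        # hypothetical interfering device
M = np.array([0.5, 2500.5])                # basepoints from the synthetic test
N = np.array([2000.5, 0.5])

print(toa_distance(device, M))             # range; divide by signal speed for a TOA
print(np.degrees(aoa_angle(device, M)))
print(tdoa_range_diff(device, M, N))
```

Dividing the distance by the propagation speed gives the TOA itself; the design problem that follows works directly with these geometric quantities.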
Our goal is to find an optimal configuration for the basepoints in such a way that the error of the position of the interfering device is minimized. No actual measurements, signal speeds, or observed errors in the TOA, AOA, and TDOA are needed for such a design problem. When the locations of the basepoints are optimally determined, stations equipped with sensors or receivers can be established for measuring the TOA, AOA, and TDOA. However, the risk of a lack of connection between the basepoints and grid points always exists, which means the sensors and receivers at these points should be able to receive the interference signal; otherwise, localization is not possible. Therefore, enough of such basepoints should be considered for covering the control area.
Solution of the Coordinates from the Basepoints
First, assume that the basepoints have known coordinates and we want to see how large the errors of the control points are over the area based on our observables. However, because the observables have nonlinear mathematical models, they need to be linearized. Generally, a linearized model for a control point over the area is of Gauss-Markov type (see Koch 2010)

$$\mathbf{L} = \mathbf{A}\mathbf{x} + \boldsymbol{\varepsilon}, \qquad E\{\boldsymbol{\varepsilon}\} = \mathbf{0}, \qquad \mathbf{C}_L = \sigma_0^2 \mathbf{Q} \qquad (2a)$$

where A = coefficient matrix, whose elements are partial derivatives of the observables with respect to the coordinates of the control point; x = vector of the coordinate updates to their initial values; L = vector of differences between the actual observations and those computed from the initial coordinates; ε = vector of random errors with E{ε} = 0, where E{} stands for the statistical expectation; C_L = VC matrix of the observations; Q = cofactor matrix; and finally σ₀² = a priori variance of unit weight. In Eq. (2a), the only parameter that carries the geometric properties of the basepoints and the control point is the matrix A. This matrix contains the partial derivatives of the observables with respect to the coordinates of the unknown control point, which are, in fact, the sensitivities of the observables with respect to the coordinates. For example, in the case of 2D localization of one interference device from m basepoints, the structure of the matrix A for the TOA, AOA, or TDOA is

$$\mathbf{A} = \begin{bmatrix} \dfrac{\partial w_1}{\partial x_i} & \dfrac{\partial w_1}{\partial y_i} \\ \vdots & \vdots \\ \dfrac{\partial w_m}{\partial x_i} & \dfrac{\partial w_m}{\partial y_i} \end{bmatrix}, \qquad \text{where } w = \text{TOA, AOA, or TDOA and } i = 1, 2, \ldots, n \qquad (2b)$$

and the elements of A follow from differentiating the observation equations [Eqs. (1a)-(1c)]. The least-squares solution of Eq. (2a) is (e.g., Cooper 1987)

$$\hat{\mathbf{x}} = (\mathbf{A}^T \mathbf{Q}^{-1} \mathbf{A})^{-1} \mathbf{A}^T \mathbf{Q}^{-1} \mathbf{L} \qquad (2f)$$

and the VC matrix of the estimated coordinates is

$$\mathbf{C}_{\hat{x}} = \sigma_0^2 (\mathbf{A}^T \mathbf{Q}^{-1} \mathbf{A})^{-1} \qquad (2g)$$

No observation is needed to compute the VC matrix of the estimated coordinates, and the configuration can be optimized using Eq. (2g) and some a priori values of the measurement quality.
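As an illustration of Eqs. (2b) and (2g), the sketch below builds the TOA design matrix for one control point by analytic differentiation of Eq. (1a) and evaluates the VC matrix and the square root of its trace (the HDOP) without any observations. The unit-weight assumption Q = I and the evaluated coordinates are our own choices, not the paper's:

```python
import numpy as np

def design_matrix_toa(device, bases):
    """A of Eq. (2b) for TOA: rows hold d(d_ij)/dx_i and d(d_ij)/dy_i."""
    A = np.zeros((len(bases), 2))
    for j, b in enumerate(bases):
        d = np.hypot(b[0] - device[0], b[1] - device[1])
        A[j] = [(device[0] - b[0]) / d, (device[1] - b[1]) / d]
    return A

def vc_matrix(A, sigma0=1.0, Q=None):
    """C_x = sigma0^2 (A^T Q^-1 A)^-1 of Eq. (2g)."""
    Q = np.eye(A.shape[0]) if Q is None else Q
    return sigma0**2 * np.linalg.inv(A.T @ np.linalg.inv(Q) @ A)

bases = [np.array([0.5, 2500.5]),          # M, N, O of the synthetic test
         np.array([2000.5, 0.5]),
         np.array([4000.5, 5000.5])]
C = vc_matrix(design_matrix_toa(np.array([2000.0, 2500.0]), bases), sigma0=1.0)
print(C, np.sqrt(np.trace(C)))             # VC matrix and HDOP at one control point
```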
Eq. (2a) describes a system of equations connecting the observables to the two-dimensional coordinates of an unknown control point. Obviously, at least two observables from two different basepoints are needed to have a unique solution for the system. In the case of having more basepoints, the least-squares method is applicable.
Suppose that three basepoints and three observables are used; then the system in Eq. (2a) will have three equations and two unknowns for one control point. Our goal is to optimize the configuration of these basepoints so they cover the control area. Consequently, many such control points, which are probable locations of an interfering device, should be considered over this area.
Optimal Configuration of the Basepoints Based on the Desired Errors of Control Points
As shown, the VC matrices of the control points are determined from the observables and the locations of the basepoints. The coordinates of these control points are known, because the size of the grid and its cells are designed by us. The basepoints are then considered as unknowns, but an initial configuration is needed, which can be derived from the local coordinate system defined by the grid. Therefore, initial VC matrices for all control points are computed, and after that the configuration is optimized by varying the coordinates of the basepoints.
A criterion matrix is needed for the optimization process, which should be known a priori from the expected variances of the control points. In a two-dimensional network, each VC matrix has four elements: two variances as diagonal elements and two equal covariances as off-diagonal ones. Normally, a diagonal matrix with the desired variances of the coordinates of the unknown points on its diagonal is considered as the criterion. The initial VC matrices should be fitted to this criterion in a least-squares sense by varying the coordinates of the basepoints. In other words, through optimization the locations of the basepoints are determined in such a way that the VC matrices of the control points are fitted to the criterion.
To present this idea mathematically, the VC matrix of one point [Eq. (2g)] is expanded with respect to the basepoint coordinates by a Taylor series (e.g., Xu 1989; Koch 1982, 1985; Kuang 1996)

$$\mathbf{C}_x = \sigma_0^2 (\mathbf{A}^T \mathbf{Q}^{-1} \mathbf{A})^{-1} + \sum_{j=1}^{m} \left( \frac{\partial \mathbf{C}_x}{\partial x_j} \Delta x_j + \frac{\partial \mathbf{C}_x}{\partial y_j} \Delta y_j \right) \qquad (3a)$$

where A = initial coefficient matrix derived from the approximate positions of the basepoints; Δx_j and Δy_j = coordinate updates to the basepoints' positions; and m = number of basepoints. It is not difficult to show that (see also Kuang 1996; Eshagh and Alizadeh-Khameneh 2014)

$$\frac{\partial \mathbf{C}_x}{\partial x_j(y_j)} = -\sigma_0^2 (\mathbf{A}^T \mathbf{Q}^{-1} \mathbf{A})^{-1} \left[ \left( \frac{\partial \mathbf{A}}{\partial x_j(y_j)} \right)^{\!T} \mathbf{Q}^{-1} \mathbf{A} + \mathbf{A}^T \mathbf{Q}^{-1} \frac{\partial \mathbf{A}}{\partial x_j(y_j)} \right] (\mathbf{A}^T \mathbf{Q}^{-1} \mathbf{A})^{-1} \qquad (3b)$$

where the structure of (∂A)/(∂x_j) or (∂A)/(∂y_j), which we denote by (∂A)/[∂x_j(y_j)], for the TOA and AOA measurements in a 2D case of m basepoints and one interference device is an m × 2 matrix that is zero except in row j, whose elements are the second-order derivatives of the observable w_j with respect to the basepoint coordinate x_j (or y_j) and the control point coordinates x_i and y_i. C_x on the left-hand side of Eq. (3a) is the criterion matrix, and σ₀²(AᵀQ⁻¹A)⁻¹, which is the initial VC matrix C′_x, should be fitted to C_x. Let us rewrite Eq. (3a) in the following form:

$$\mathbf{C}_x - \mathbf{C}'_x = \sum_{j=1}^{m} \left( \frac{\partial \mathbf{C}_x}{\partial x_j} \Delta x_j + \frac{\partial \mathbf{C}_x}{\partial y_j} \Delta y_j \right) \qquad (3l)$$

C_x is a 2 × 2 matrix having the variances of the x- and y-coordinates on its diagonal and the covariance between them off-diagonal. Eq. (3l) means that the updates Δx_j and Δy_j to the initial coordinates x_j and y_j are estimated in such a way that the updated C′_x is fitted to the desired C_x. Because in this study three basepoints are considered and each has two coordinates (x and y), six unknown parameters exist in the system of equations.
Our VC matrices are 2 × 2, meaning they contain four elements, and our system has four equations for each control point. However, there are many points already designed over the control area, and each one adds four equations to our system, while the number of unknown parameters remains constant. In this case, a large system of equations is created, and the coordinates of the basepoints are estimated in such a way that the best fit to all criterion matrices for the control points is achieved. Let us present this system in the following form:

$$\Delta \mathbf{L} = \mathbf{B}\, \Delta \mathbf{x} + \boldsymbol{\varepsilon}' \qquad (4a)$$

where ε′ = vector of residuals; ΔL = vector of differences between the elements of the criterion and initial VC matrices; B = coefficient matrix containing the partial derivatives of the VC matrices with respect to the basepoints' coordinates; and Δx = vector of basepoints' coordinate updates. Their mathematical descriptions are

$$\Delta \mathbf{L} = \begin{bmatrix} \mathrm{vec}(\mathbf{C}_x - \mathbf{C}'_x)_1 \\ \vdots \\ \mathrm{vec}(\mathbf{C}_x - \mathbf{C}'_x)_n \end{bmatrix}\!, \quad \mathbf{B} = \begin{bmatrix} \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial x_1}\right)_{\!1} & \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial y_1}\right)_{\!1} & \cdots & \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial y_m}\right)_{\!1} \\ \vdots & \vdots & & \vdots \\ \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial x_1}\right)_{\!n} & \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial y_1}\right)_{\!n} & \cdots & \mathrm{vec}\!\left(\dfrac{\partial \mathbf{C}_x}{\partial y_m}\right)_{\!n} \end{bmatrix}\!, \quad \Delta \mathbf{x} = \begin{bmatrix} \Delta x_1 & \Delta y_1 & \cdots & \Delta x_m & \Delta y_m \end{bmatrix}^T \qquad (4b)$$

where the operator vec inserts the columns of the VC matrices below each other and converts the 2 × 2 matrices to 4 × 1 vectors; ( )ᵀ stands for the transposition operator of matrix algebra; and n = number of control points.
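In practice, B can also be assembled by numerical differentiation rather than by the analytic derivatives of Eq. (3b). The sketch below (our illustration, reusing the hypothetical design_matrix_toa and vc_matrix from the previous sketch; the step size and grid are assumptions) stacks the system of Eq. (4a) over all control points:

```python
import numpy as np

def stack_system(controls, bases, C_crit, h=0.1):
    """Assemble dL (4n) and B (4n x 2m) of Eq. (4a) by finite differences."""
    m = len(bases)
    dL, B = [], []
    for p in controls:
        C0 = vc_matrix(design_matrix_toa(p, bases))
        dL.append((C_crit - C0).flatten(order="F"))      # vec operator
        rows = np.zeros((4, 2 * m))
        for j in range(m):
            for k in range(2):                           # k = 0: x_j, k = 1: y_j
                shifted = [b.copy() for b in bases]
                shifted[j][k] += h
                Ch = vc_matrix(design_matrix_toa(p, shifted))
                rows[:, 2 * j + k] = (Ch - C0).flatten(order="F") / h
        B.append(rows)
    return np.concatenate(dL), np.vstack(B)

C_crit = np.diag([2.0, 2.0])                             # diagonal criterion matrix
controls = [np.array([x, y]) for x in np.arange(40.0, 4000.0, 500.0)
                              for y in np.arange(40.0, 5000.0, 500.0)]
dL, B = stack_system(controls, bases, C_crit)
dx, *_ = np.linalg.lstsq(B, dL, rcond=None)              # unconstrained update (constraints below)
```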
Constraints
At first glance, one expects to solve the system of Eq. (4a) by the least-squares method. However, unrealistic results may be obtained; for example, some basepoints may be located far from the area, or they may be located close to the center and create a weak geometric configuration. Hence, some constraints are needed for solving this system of equations, such as:
• The basepoints should be around the area.
• Their average x- and y-coordinates should be at the center of the area.
• They should be well distributed around the area.
The use of all these constraints depends on the study area and on how much freedom the basepoints have to move during the optimization process. These constraints act like the inner constraints (e.g., Cooper 1987; Tan 2005) that are applied in geodetic network adjustment to overcome the rank deficiency of the established system of equations due to a datum defect. However, our system does not have this deficiency, because the grid points already have known coordinates. These constraints are used only to limit the movements of the basepoints.
Limiting Search Area for the Basepoints
To keep the basepoints outside the control area, some inequality constraints are applied for limiting the search domain of the coordinate variations; such inequality constraints are (compare Kuang 1996; Eshagh and Alizadeh-Khameneh 2014)

$$w_j^L \le x_j \le w_j^U \qquad (5a)$$

$$v_j^L \le y_j \le v_j^U \qquad (5b)$$

where $w_j^L$ and $w_j^U$ = lower and upper bounds of the inequality limiting the x-coordinate of the jth basepoint, respectively; and $v_j^L$ and $v_j^U$ = corresponding limits for the y-coordinate. To write these constraints in terms of the coordinate updates being estimated from the optimization process, Eqs. (5a) and (5b) are written in the following forms (compare Eshagh and Alizadeh-Khameneh 2014):

$$w_j^L - x_j^0 \le \Delta x_j \le w_j^U - x_j^0 \qquad (5c)$$

$$v_j^L - y_j^0 \le \Delta y_j \le v_j^U - y_j^0 \qquad (5d)$$

where $x_j^0$ and $y_j^0$ = initial coordinates of the jth basepoint. According to Eqs. (5c) and (5d), the coordinate updates are limited in such a way that the coordinates remain within the specified intervals [Eqs. (5a) and (5b)] outside the area.
These inequality constraints can be written in the following vector form:

$$\Delta \mathbf{x}^L \le \Delta \mathbf{x} \le \Delta \mathbf{x}^U \qquad (5e)$$

where $\Delta \mathbf{x}^L$ and $\Delta \mathbf{x}^U$ = vectors collecting the lower and upper bounds of Eqs. (5c) and (5d) for all basepoints.

Average Coordinates of the Basepoints at the Center of Area

To keep the average of the basepoints' coordinates at the center of the control area, the following equations can be used (compare Cooper 1987; Kuang 1996):

$$\frac{1}{m} \sum_{j=1}^{m} x_j = x_G \qquad (6a)$$

$$\frac{1}{m} \sum_{j=1}^{m} y_j = y_G \qquad (6b)$$

where $x_G$ and $y_G$ = coordinates of the center of the control area, which can be determined from the control points; and m = 3 is the number of the basepoints.
Because the nonlinear mathematical models are linearized in our optimization process, Δx_j and Δy_j are estimated and not the coordinates directly. Therefore, Eqs. (6a) and (6b) need to be rewritten in terms of these coordinate updates

$$\frac{1}{m} \sum_{j=1}^{m} \Delta x_j = x_G - \frac{1}{m} \sum_{j=1}^{m} x_j^0 \qquad (6c)$$

$$\frac{1}{m} \sum_{j=1}^{m} \Delta y_j = y_G - \frac{1}{m} \sum_{j=1}^{m} y_j^0 \qquad (6d)$$

Distribution of the Basepoints around the Area

Bearing angles from the center of the area to the basepoints can be used for making a constraint distributing the basepoints around the area. By assuming that the summation of all these bearing angles is zero, we can write (compare Kuang 1996)

$$\sum_{j=1}^{m} \arctan\frac{x_j - x_G}{y_j - y_G} = 0 \qquad (7a)$$

Again, Eq. (7a) should be written in terms of the coordinate updates; therefore, its linearized form will be applied

$$\sum_{j=1}^{m} \frac{(y_j^0 - y_G)\,\Delta x_j - (x_j^0 - x_G)\,\Delta y_j}{l_{jG}^2} = -\sum_{j=1}^{m} \arctan\frac{x_j^0 - x_G}{y_j^0 - y_G} \qquad (7b)$$

where $l_{jG}$ = distance between the center of the control area and the basepoint j. Eq. (7b) can be written in the following matrix form (compare Kuang 1996)

$$\mathbf{g}^T \Delta \mathbf{x} = c \qquad (7c)$$

where g = vector collecting the coefficients of the updates in Eq. (7b); and c = its right-hand side.
Optimization Model
The system of equations in Eq. (4a) should be solved for the coordinate updates in a least-squares sense, but subject to the mentioned constraints. Such an optimization model is (compare Koch 1982, 1985; Kuang 1996)

$$\min_{\Delta \mathbf{x}} \; \boldsymbol{\varepsilon}'^T \boldsymbol{\varepsilon}' = (\Delta \mathbf{L} - \mathbf{B}\,\Delta \mathbf{x})^T (\Delta \mathbf{L} - \mathbf{B}\,\Delta \mathbf{x}) \quad \text{subject to Eqs. (5e), (6c), (6d), and (7c)} \qquad (8)$$

Considering all these constraints, Eq. (8) might not be feasible in practice. As shown in the synthetic test, all basepoints have the freedom to move, but their movements can be limited by the three constraints, so that they stay around the area through the search area constraint and around the center of the area with a good distribution. However, in the real study for the international Landvetter Airport, because of the presence of forests around the area, keeping the basepoints around the area is not practically possible, and only limiting the search area around each basepoint is suitable for keeping them in the area.
Today, different software programs exist for solving the optimization problem in Eq. (8). The theory of solving this problem is known (e.g., Bazaraa and Shetty 1976; Grafarend and Sanso 1985). However, the optimization problem in Eq. (8) is not global because of the involvement of the different constraints. The unconstrained optimization may diverge, or the basepoints may move outside the area, move toward each other, or become collinear. Applying at least the search area constraint is a necessity for obtaining a convergent solution. The resolution of the grid of control points plays a role in the rate of convergence and the computational time. In addition, a threshold should be defined a priori to stop the iterative optimization; in this study, the norm of the coordinate updates is computed, and when it becomes smaller than 10 cm during the iterations, the optimization is stopped.
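A minimal sketch of such an iterative solution is given below, assuming the bound-constrained least-squares step of Eq. (5e) is handled by scipy.optimize.lsq_linear and reusing the hypothetical stack_system from the previous sketch; the equality constraints of Eqs. (6c)-(6d) and (7c) are omitted for brevity, and the 400 m bound is only an example:

```python
import numpy as np
from scipy.optimize import lsq_linear

def optimize_basepoints(bases, controls, C_crit, bound=400.0, max_iter=50):
    """Iterate the linearized fit of Eq. (4a) subject to the box bounds of Eq. (5e)."""
    bases = [b.astype(float).copy() for b in bases]
    x0 = np.concatenate(bases)                 # initial coordinates (Eqs. 5c-5d refer to these)
    for _ in range(max_iter):
        dL, B = stack_system(controls, bases, C_crit)
        cur = np.concatenate(bases)
        lo, hi = (x0 - bound) - cur, (x0 + bound) - cur
        dx = lsq_linear(B, dL, bounds=(lo, hi)).x   # constrained least-squares update
        for j in range(len(bases)):
            bases[j] += dx[2 * j:2 * j + 2]
        if np.linalg.norm(dx) < 0.1:           # stop when the update norm drops below 10 cm
            break
    return bases

optimal = optimize_basepoints(bases, controls, C_crit)
```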
Synthetic Numerical Test
To test the presented theory, a rectangular area of 4 × 5 km is considered with resolutions of 10, 20, 40, and 80 m. Three basepoints, M, N, and O, are chosen manually around the area; see their initial coordinates in Table 1 and their locations in Fig. 2(a), shown by small circles. Fig. 2(a) shows the pattern of the square root of the trace of the VC matrix of the control points, i.e., the horizontal dilution of precision (HDOP), based on the TOA observables and a resolution of 40 × 40 m. Because the HDOP patterns for the different resolutions were similar, only one is presented. Generally, the HDOP based on the TOA has no units, but we considered an a priori precision of 1 m for the TOA distances to give a unit to the computed HDOP. Such a precision seems large for geodetic networks, but it is acceptable in localization of an interference device by wireless networks (e.g., Trinkle et al. 2012). As observed, the largest error reaches about 2.2 m, near the farthest basepoint O at the upper-right corner of the area. A diagonal criterion matrix with equal diagonal elements of 2 m was considered for fitting the VC matrices of the control points to it. The basepoints were kept, as an example, 400 m outside the area margins by using the inequality constraint in Eq. (5e). Fig. 2(b) shows the optimized locations of the basepoints on the map of the HDOP. A significant reduction of the HDOP is seen, with the maximum reducing to 1.6 m. In addition, the configuration forms almost an equilateral triangle, which is more stable than the initial design forming an isosceles triangle. However, the weak points having larger HDOPs are seen around the basepoints, elongated toward the center of the area. This could be expected because two of the three distances from each such control point to the three basepoints are considerably longer than the third, forming a weaker geometry than at the points located at the center of the area.
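The HDOP field of this test can be reproduced with the functions sketched earlier (our illustration; the 40 m resolution and the 1 m a priori TOA precision follow the text, the rest is assumed):

```python
import numpy as np

xs = np.arange(0.0, 4000.1, 40.0)             # 4 x 5 km area at 40 x 40 m resolution
ys = np.arange(0.0, 5000.1, 40.0)
hdop = np.zeros((xs.size, ys.size))
for ix, x in enumerate(xs):
    for iy, y in enumerate(ys):
        C = vc_matrix(design_matrix_toa(np.array([x, y]), bases), sigma0=1.0)
        hdop[ix, iy] = np.sqrt(np.trace(C))   # square root of the trace of the VC matrix
print(hdop.max())                              # largest HDOP over the control area
```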
A similar HDOP pattern is seen when the AOA observables are considered. The same criterion matrix was used for the control points, with an a priori variance factor of 60 arc-seconds as the accuracy of the measured AOAs. Fig. 2(c) shows that the HDOP before optimization reaches about 3.5 m; after optimization, Fig. 2(d), it is significantly reduced to about 1.8 m with a symmetric configuration.
Fig. 2(e) is a similar HDOP map for the TDOA, with an a priori variance factor of 1 m, based on the same resolution before optimization. It shows that the HDOP reaches about 3.5 m and the weak points are at the corners. Generally, a better coverage over the area can be seen using the TDOA. The HDOP after optimization is shown in Fig. 2(f). As observed, a symmetric geometry is created by the optimization process; the weak points remain around the corners of the area, but with a proper coverage over the inner area, while the HDOP is reduced to 2.5 m.
The HDOP derived from the TDOA gives a good coverage over the area, and the weak points, although they have large HDOPs, are not close to the basepoints. However, the mathematical models are more complicated because of their differential nature.
To test the sensitivity of our optimization method to the observables, the process was repeated with different resolutions. Table 1 gives the optimal horizontal positions of the three basepoints after optimization, while the initial positions of these points were M (0.5, 2,500.5), N (2,000.5, 0.5), and O (4,000.5, 5,000.5) prior to optimization.
According to Table 1, when the TOA is used, the y-coordinate of M remains the same for the resolutions 10 × 10 m and 20 × 20 m, and the x-coordinate remains constant for all. There are variations in the x-coordinate of N, but it is almost constant for these resolutions, as is the x-coordinate of O. Generally, with resolutions finer than 20 × 20 m, the optimized coordinates of the points are almost the same for a given observable.
Similar results are obtained when the AOA is used. A resolution of 30 × 30 m is good enough for optimization, but for the TDOA no convergence is seen when increasing the resolution. Nevertheless, the differences between the coordinates obtained from optimization using two successive resolutions are on the order of the resolutions. Therefore, the TDOA is more sensitive to the selected resolution, and no specific resolution can be determined for optimization based on it.
Generally, applying a dense grid for the control points is suggested, but it imposes a huge computational burden and takes a long time to perform the optimization process. One important issue that needs to be highlighted is that our study area is a 4 × 5 km rectangle, close to a square.
Real Case Study: Landvetter Airport
So far, our simulation study showed that the presented theory works with some constraints. Designing a security network is not as simple as in the simulation study unless the area is flat with no signal hindrance. Here, an international airport is considered, and the positions of three basepoints are optimized in the designed control network under the assumption of the presence of an interference device inside the airport. Not all of our proposed constraints can be applied to this study area. First, there are forests around it, which are not suitable places for our sensors. Therefore, spreading the basepoints outside and around the airport is not realistic. Second, the search areas around the basepoints are limited because there is not much freedom for them to move.
The international Landvetter Airport of Sweden in Gothenburg was selected, and a local planar coordinate system was defined for the airport. The Gaussian radius of curvature at the system origin was computed from its latitude at the surface of the WGS84 reference ellipsoid with a = 6,378,137 m and e² = 0.0068. A grid with a resolution of 40 × 40 m was considered over the airport; it was rotated based on the azimuth of the leftmost runway (extracted from Google Maps), and later the grid coordinates were transformed to geodetic coordinates by

$$\varphi_i = \varphi_O + \frac{y_i}{M}, \qquad \lambda_i = \lambda_O + \frac{x_i}{N \cos \varphi_O}$$

where N and M = well-known radii of the prime vertical and of curvature of the local meridian at the point with the latitude φ_O, respectively.
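A small sketch of this transformation, assuming the standard formulas for M and N on the WGS84 ellipsoid and small offsets from the origin (the origin latitude and longitude below are only approximate values for Landvetter, and the function name is ours):

```python
import numpy as np

A_WGS84, E2 = 6378137.0, 0.0068        # semimajor axis and eccentricity squared as given in the text

def local_to_geodetic(x, y, lat0_deg, lon0_deg):
    """Convert small local planar offsets (m) to geodetic coordinates (deg)."""
    phi0 = np.radians(lat0_deg)
    w = np.sqrt(1.0 - E2 * np.sin(phi0) ** 2)
    N = A_WGS84 / w                     # radius of the prime vertical
    M = A_WGS84 * (1.0 - E2) / w ** 3   # radius of curvature of the local meridian
    return (lat0_deg + np.degrees(y / M),
            lon0_deg + np.degrees(x / (N * np.cos(phi0))))

print(local_to_geodetic(1000.0, 2000.0, 57.67, 12.29))   # assumed origin near the airport
```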
Landvetter Airport's runway is elongated in the southwest-northeast direction with an approximate bearing angle of 26°. The area is almost rectangular, with a length of 5 km and a width of 2 km. Based on the geometry of the airport, no symmetric configuration can be considered for the basepoints. Therefore, from the satellite photo and Google Earth, we selected three suitable places; Fig. 3 shows the selected areas, marked by squares, for the optimal locations of the basepoints. In fact, an initial position was considered for each basepoint, which could vary within a square with a side of 400 m. Optimization based on the TDOA turned out to be unstable and unrealistic, and even when the optimization was performed, the constraints were violated and the basepoints moved outside the area. For this reason, no result is presented for the design using the TDOA. Table 2 lists the initial and optimized positions of the basepoints M, N, and O. Optimization based on the TOA and AOA leads to the same positions for the basepoints M and O, but N is more influenced by the type of observables. Generally, M and O are closer to each other, while N is far from both. These points form a triangle having a small angle at N. It is known that an equilateral or equiangular triangle is a strong geometric form. When a triangle has a different form, it has a weaker geometry, such as when M, N, and O form a triangle with such a small angle at N. This is the reason N moves during the optimization process, because it has a direct influence on the strength of the geometry of the basepoints.
Conclusions
The TOA, AOA, or TDOA of signals arriving at sensors or receivers can be used for localization of a signal interference device. This article showed that the optimal geometric configuration formed by the basepoints, at which these signal arrivals are measured, strongly depends on the type of arrival and the ability of the sensors or receivers to measure these arrivals. A design using the AOA showed better stability and robustness with respect to the resolution of the designed grid than its rivals, and optimization based on the TDOA improves the coverage over the control area. Generally, our optimization method needs some constraints for stabilizing the solution; otherwise, the basepoints may form an improper configuration during the iterative optimization process, e.g., they may move toward each other or become collinear. However, in our real case study, the search domains for the basepoints were limited, and subjecting the optimization process to extra constraints, as done in the simulation test, led to infeasible solutions. In other words, the choice of the constraints is highly dependent on the shape of the area.
Fig. 3. Landvetter Airport of Sweden and the search areas, marked by rectangles, for estimation of the optimal positions of the basepoints. (Base image © 2018 Google Earth.)
Table 1. Optimal coordinates for the basepoints M, N, and O based on resolution of design and type of observable
Table 2. Initial and optimized coordinates of the basepoints before and after optimization
"year": 2023,
"sha1": "5510041f235c4c08ae86c5cf639bc0e4b3825aa7",
"oa_license": "CCBY",
"oa_url": "https://ascelibrary.org/doi/pdf/10.1061/(ASCE)SU.1943-5428.0000416",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "26633caa8a88d3ac52dedcf236413f13cd8bc297",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Converting Sugars to Biofuels: Ethanol and Beyond
To date, the most significant sources of biofuels are starch- or sugarcane-based ethanol, which have been industrially produced in large quantities in the USA and Brazil, respectively. However, the ultimate goal of biofuel production is to produce fuels from lignocellulosic biomass-derived sugars with optimal fuel properties and compatibility with the existing fuel distribution infrastructure. To achieve this goal, metabolic pathways have been constructed to produce various fuel molecules, which fall into fermentative alcohols (butanol and isobutanol), non-fermentative alcohols from 2-keto acid pathways, fatty acid-derived fuels and isoprenoid-derived fuels. This review will focus on current metabolic engineering efforts to improve the productivity and the yield of several key biofuel molecules. The strategies used in these metabolic engineering efforts can be summarized as follows: (1) identification of better enzymes; (2) flux control of intermediates and precursors; (3) elimination of competing pathways; (4) redox balance and cofactor regeneration; and (5) bypassing regulatory mechanisms. In addition to metabolic engineering approaches, host strains are optimized by improving sugar uptake and utilization, and by increasing tolerance to toxic hydrolysates, metabolic intermediates and/or biofuel products.
Introduction: Sources of Sugars for Biofuel Production
Ethanol and biodiesels have been industrially produced from biomass by fermentation and by chemical trans-esterification of plant oils, respectively. For example, sugarcane-derived sugars (sucrose) have been used for ethanol fermentation in Brazil, and corn-derived starches (glucose) have been the major feedstock in the USA. Since consumption of these feedstocks for biofuel production competes with demand for animal feed and human food [1], lignocellulosic biomass (LCB) has been suggested as an alternative and sustainable feedstock for biofuel industries. In using biomass for microbial fermentation, both non-LCB and LCB require pretreatment and hydrolysis of the raw feedstock to release fermentable sugars from biomass consisting of complex, polymeric structures (Figure 1). The hydrolysis process for non-LCB such as corn starch has been well established in existing fermentation industries, but deconstruction of LCB has been limited due to the resistance of LCB to chemical and enzymatic treatment [1]. Moreover, hydrolysates of LCB include a mixture of pentoses and hexoses, inhibitory compounds (e.g., furfural, phenols) and toxic solvents produced during pretreatment, all of which make downstream microbial fermentation difficult. Therefore, there have been studies to establish microbial hosts that co-utilize pentoses and hexoses, and to engineer tolerance of the microbial hosts to the above-mentioned toxic components [1]. Beyond native fermentation pathways and natural biodiesel resources such as vegetable oils, advanced biofuel molecules are now synthesized in microbial hosts in which heterologous or synthetic metabolic pathways have been reconstructed. In this review, we will summarize recent achievements and progress in microbial biofuel production, with updates on biofuel production from LCB-derived sugars. Various pathways and hosts for ethanol and advanced biofuel production will be discussed, with a particular emphasis on metabolic engineering strategies to improve the microbial conversion bioprocess.
Fermentation Pathways and Hosts for Ethanol Production
Ethanol is produced from glucose via fermentative consumption of pyruvate [2]. Glycolysis is a metabolic process that converts glucose to the partially oxidized product pyruvate while supplying ATP for biomass production. Subsequently, under anaerobic conditions, pyruvate can be fermented to ethanol by the sequential reactions of pyruvate decarboxylase (PDC) and alcohol dehydrogenase (ADH), losing one carbon as carbon dioxide (CO2). The ethanol fermentation process has been extensively studied and exploited in Saccharomyces cerevisiae (yeast) and Escherichia coli [1,3], due to the relative technological maturity of genetic engineering in these microbes. Other species have also been considered as production hosts due to the advantages of their native enzymes and pathways. For instance, Zymomonas mobilis has been suggested as an alternative host to yeast because of its advantage in ethanol yield, since it utilizes the Entner-Doudoroff (ED) pathway instead of the Embden-Meyerhof-Parnas (EMP) pathway for glycolysis [4] (Figure 2). Although the EMP pathway is the major glycolysis route in most eukaryotes and prokaryotes, glycolysis pathways are much more diverse in prokaryotes [5]. Among the variants of the glycolysis pathway, the ED pathway is the most abundant route together with the EMP pathway in some prokaryotes such as Z. mobilis [6]. While the EMP pathway produces two ATPs from each glucose molecule consumed, the ED pathway produces only one ATP molecule per glucose molecule. Given that ATP is tightly coupled with anabolism and cell growth, the ED pathway-utilizing Z. mobilis produces less biomass than EMP pathway-dependent species such as S. cerevisiae and E. coli. Consequently, Z. mobilis has more carbon available for ethanol fermentation, with a 2.5-fold higher specific ethanol productivity than that of S. cerevisiae, and produces up to 97% of the theoretical yield [7]. In addition, Z. mobilis has been engineered to co-utilize glucose, mannose and xylose, expanding its capability for ethanol fermentation of LCB-derived sugars [8]. Clostridia, on the other hand, have advantages over S. cerevisiae because Clostridia naturally secrete enzymes that are capable of hydrolyzing complex carbohydrates (oligosaccharides and polysaccharides) into fermentable sugars and of utilizing both hexoses and pentoses [9,10]. As a result, Clostridia have been suggested as candidate hosts for biofuel production from LCB by consolidated bioprocessing (CBP). However, Clostridia are strict anaerobes and their growth is relatively slower than that of other microbial hosts, which makes fermenter operation difficult. Various aspects of Clostridia for industrial use have been well summarized in a previous review [11].
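For reference, the theoretical yields quoted throughout this review follow from the overall fermentation stoichiometry (a standard calculation, not taken from the cited papers):

$$\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH + 2\,CO_2}, \qquad Y_{\max} = \frac{2 \times 46.07\ \mathrm{g/mol}}{180.16\ \mathrm{g/mol}} \approx 0.511\ \mathrm{g\ ethanol/g\ glucose}$$

so the 97% of theoretical yield reported for Z. mobilis corresponds to roughly 0.50 g of ethanol per gram of glucose consumed.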
In this section, we briefly introduce various aspects of the selection of microbial hosts for ethanol production: energetics of the glycolysis pathways, available genetic engineering tools, flexibility in sugar utilization and compatibility with effective fermenter operation. The following section will discuss metabolic engineering strategies that have been applied to optimize and improve microbial hosts for ethanol production.
Metabolic Pathway and Host Engineering for Ethanol Production
Since sugars are both the carbon and energy sources for biomass and ethanol production, more efficient uptake and utilization of various sugars are important factors that can improve ethanol productivity. For example, the uptake rate of sucrose was improved in yeast for more efficient utilization of sucrose from sugarcane [12]. In that study, a S. cerevisiae strain with an intracellularly localized sucrose invertase (iSUC1) was evolved in a sucrose-limited chemostat, and the evolved strain showed higher sucrose-proton symporter activity and an ethanol yield increased by 11% [12]. In general, most industrial microbial hosts have a good capability of utilizing hexoses, but a restricted capability of utilizing pentoses such as xylose, the second most abundant sugar in biomass, and arabinose, due to the lack of pentose utilization pathways and to catabolite repression in the presence of glucose. Although there have been efforts to expand the substrate utilization capability of yeast by engineering the substrate affinity of sugar transporters toward pentoses rather than hexoses, no significant progress has been made yet [13]. In one study, transporter mutants of GXS1 from Candida intermedia and XUT3 from Scheffersomyces stipitis were expressed, which increased the growth rate of yeast on xylose by 70% and changed the pattern of diauxic shifts [14]. In a follow-up study, Young and colleagues identified a sequence motif, G-G/F-XXX-G, whose saturation mutagenesis generated transporter mutants that have an exclusive specificity for xylose but not for glucose. All of these mutants, however, were still found to be repressed by glucose [15]. In another study, Farwick and colleagues [16] screened for glucose-insensitive xylose transporter mutants. They found a mutation at either of two conserved residues located near the entrance of the sugar-binding pocket, and a mutant of a yeast hexose transporter (Gal2-N376F) was identified to have the highest affinity for xylose among the mutants without glucose transport activity [16].
Another metabolic engineering strategy is to maximize the flux of sugars to ethanol while minimizing the flux to biomass or to other fermentation byproducts such as glycerol. Since formation of highly reduced fermentation products such as glycerol is driven by accumulation of cytosolic NADH, S. cerevisiae has been engineered to maintain a lower level of cytosolic NADH by various genetic modifications, such as the deletion of NADPH-dependent glutamate dehydrogenase (GDH1) along with the overexpression of glutamate-ammonia ligase (GLN1) and glutamate transporter (GLT1); and the substitution of the innate glyceraldehyde-3-phosphate dehydrogenase (GAPDH) with a heterologous GAPDH from either Bacillus cereus or Streptococcus mutans. The latter approach aimed to decrease cytosolic NADH formation and ATP production by using an alternative non-phosphorylating, NADP+-dependent glyceraldehyde-3-phosphate dehydrogenase (GAPN) in place of GAPDH [13]. In addition to these efforts to decrease cytosolic NADH, the metabolic pathways of ethanologenic E. coli were redesigned at the systems level by elementary mode analysis for metabolic coupling of biomass and ethanol production. Deletion of nine genes in central metabolism was suggested by the elementary mode analysis, and the engineered E. coli strain produced 90% of the theoretical yield after 48 h of fermentation [17]. In a separate effort, carbon fluxes to biomass, organic acids and ethanol were redistributed by heterologously expressing the pyruvate decarboxylase (PDC) and alcohol dehydrogenase (ADHII) of Z. mobilis in Streptomyces lividans TK24 [18].
In addition to metabolic engineering efforts to diversify sugar utilization of microbial hosts and to regulate carbon metabolism fluxes, there have been other approaches to improve ethanol production, especially by improving the industrial bioprocess. One example is to improve resistance toward ethanol itself, growth inhibitors and toxic components from LCB-hydrolysates as well as other general stresses in ethanol-producing hosts [13]. Recently, extensive studies have been performed on ethanol production from LCB, and the advances are well summarized in a recent review [1]. Co-utilization of heterogeneous sugars in LCB hydrolysates still remains an unresolved issue in the production of LCB-derived ethanol. Since cellular processes involved in sugar consumption are complex, more systematic engineering of various factors such as transporters, regulatory mechanisms of catabolites and cellular responses to stress caused by intermediates and products would be required. Improved sugar utilization capability will benefit not only ethanol production from LCB but also the production of advanced biofuels from LCB, which will be discussed in the following section.
Metabolic Pathway and Host Engineering for Advanced Biofuels Production
Although ethanol is, together with biodiesels, the most widely produced biofuel, it is not an ideal alternative or blending fuel due to its low energy content (only about 70% of that of gasoline) and hygroscopic nature [19]. As a result, there has been significant demand for advanced "drop-in" biofuels that have better fuel properties and are compatible with current engines and infrastructure.
Good alternative transportation fuels would have chemical structures and properties similar to those found in existing transportation fuels (gasoline, diesel, and jet fuels). Metabolic pathways that produce such desirable fuel-like molecules have been engineered: fermentative alcohols (butanol and isobutanol), non-fermentative alcohols from 2-keto acid pathways, fatty acid-derived fuels and isoprenoid-derived fuels.
The general overview of the pathways for advanced biofuel production is summarized in Figure 3 and Table 1.
Fermentative Pathways for 1-Butanol and Other Short Chain Alcohols
1-butanol is naturally produced by species of Clostridia, which have innate 1-butanol fermentative pathways. The 1-butanol fermentation pathway and general aspects of Clostridia physiology (e.g., the sporulation cycle and acidogenesis) were summarized along with metabolic engineering efforts to improve 1-butanol fermentation in a recent review [9]. Other specific aspects of 1-butanol fermentation by Clostridia have also been reviewed, with a particular focus on the diversity of Clostridia strains for ABE (Acetone-Butanol-Ethanol) fermentation and 1-butanol fermentation on various substrates [31]. Metabolic engineering strategies to overcome the limitations of Clostridia butanol fermentation are (i) redox balancing (e.g., regeneration of NADH via butanol fermentation, which could be increased by reducing hydrogenase activity [9]); (ii) reducing byproduct formation and improving 1-butanol productivity; and (iii) conferring host tolerance to 1-butanol [32].
Although Clostridia species are natural 1-butanol producers, they grow slowly and their genetic manipulation is still limited. To overcome these limitations, the 1-butanol fermentation pathway was reconstructed in E. coli by incorporating seven enzymes from three different species [33]. One of the key engineering strategies was substituting the flavin-dependent native Bcd/EtfAB system of Clostridia with an irreversible enoyl-CoA reductase (Ter) from Treponema denticola [34]. This Ter-based synthetic pathway has been further improved by building up NADH and acetyl-CoA as driving forces for 1-butanol production under anaerobic conditions. Accumulation of NADH and acetyl-CoA could be achieved by eliminating four fermentation pathways and by expressing formate dehydrogenase (fdh1); as a result, the titer could reach up to 30 g/L in E. coli [20]. Another study optimized the transcription level of fdh1 to minimize the redox imbalance caused by excessive regeneration of NADH, and it subsequently showed increased 1-butanol productivity in E. coli [35]. Furthermore, expression of the acrB efflux pump conferred tolerance of E. coli to 1-butanol [36]. Not only 1-butanol but also isopropanol can be produced via fermentation, by reducing acetone, one of the three fermentation products of Clostridium. The highest titer, up to 143 g/L, was achieved with gas stripping via this fermentative pathway [37].
Non-Fermentative Pathways for Short Chain Alcohols: 2-Keto Acid Pathway
Short-chain alcohols can be produced via non-fermentative pathways such as the 2-keto acid pathways, and the latest advances in production titers and engineering approaches can be found in recent review papers [38,39]. In these pathways, 2-keto acid intermediates are transformed to the corresponding aldehydes and subsequently to alcohols by decarboxylases and alcohol dehydrogenases, respectively [40] (Figure 4). 1-butanol and branched-chain alcohols such as isobutanol (C4) and isopentanols (C5) are produced from keto acid intermediates of the valine and leucine biosynthesis pathways by increasing the availability of specific keto acids and by expression of a promiscuous keto acid decarboxylase (KivD) from Lactococcus lactis together with alcohol dehydrogenase 2 (Adh2) from S. cerevisiae [38]. Subsequently, the highest isobutanol production titer from keto acid pathways, up to 50 g/L, has been achieved in E. coli with gas stripping [21]. Linear alcohols ranging from 1-pentanol (C5) to 1-octanol (C8) were also produced from a threonine-overproducing E. coli strain [41]. In that work, the carbon chain of 2-keto acids was recursively elongated by an engineered leucine synthesis pathway from E. coli (EcLeuABCD), and the production titer was about 1.4 g/L [41]. Alternative hosts such as the amino acid-overproducing Corynebacterium glutamicum and the more isobutanol-tolerant B. subtilis have also been used, and mitochondrial targeting and expression of the cytoplasmic Ehrlich pathway enzymes improved isobutanol production by 2.6-fold in yeast [42].
Recently, Tseng and colleagues demonstrated an alternative pathway to produce odd-carbon chemicals such as pentanol more efficiently, with a higher theoretical yield, in E. coli by assembling different pathways modularly [43].
Short-chain alcohols have been produced via various metabolic pathways and at relatively higher titers than other advanced biofuel molecules. Even though they are considered to have better fuel properties than ethanol, there has still been an increasing demand for biofuel molecules with better properties, such as higher energy content and lower freezing points. To address this issue, microbial hosts have been engineered to produce biofuel molecules with longer hydrocarbon chains and more branching methyl groups via various metabolic pathways, such as fatty acid metabolic pathways (Section 4.3.) and isoprenoid pathways (Section 4.4.).
Fatty Acid-Based Biofuels
The energy-rich hydrocarbon chains of fatty acids make them potential precursors for the production of diesel alternatives. Fatty acids are synthesized by the fatty acid synthase (FAS) system, which condenses malonyl-CoAs into fatty acyl esters of various lengths on the acyl-carrier protein (ACP) (Figure 5). Currently, plant oils and animal fats are chemically converted to fatty acid alkyl esters (fatty acid ethyl esters, FAEEs, and fatty acid methyl esters, FAMEs) via trans-esterification. Microbial production of FAEEs was facilitated by the identification of a promiscuous wax ester synthase/acyl-CoA:diacylglycerol acyltransferase (WS/DGAT), which was first characterized in Acinetobacter calcoaceticus ADP1 [44], and 1.28 g/L of FAEE was produced with oleic acid feeding in an E. coli strain in which pyruvate decarboxylase (Pdc) and alcohol dehydrogenase (AdhB) from Z. mobilis were co-expressed to provide ethanol [45]. Conversion of sugars to FAEEs, fatty alcohols and wax esters without extracellular feeding of fatty acids or ethanol was demonstrated by expression of pdc, adhB, tesA' (the native E. coli thioesterase with its membrane-targeting sequence truncated) and fadD (the native acyl-CoA synthetase) in an E. coli strain with the fadE gene deleted (ΔfadE), in which fatty acid metabolism was forced to produce more fatty acyl-CoA [46], an important precursor for various fatty acid-derived fuels. The introduction of a dynamic sensor-regulator system significantly increased fatty acid-based production severalfold in E. coli by balancing substrate supply levels [28], which resulted in an FAEE titer of 1.5 g/L at 28% of the maximum theoretical yield (Table 1). Fatty acid-derived fuels were also produced in yeast by overexpressing all three fatty acid biosynthesis genes (ACC1, FAS1 and FAS2) in combination with the expression of downstream enzymes (diacylglycerol acyltransferase, fatty acyl-CoA thioesterase, fatty acyl-CoA reductase, and wax ester synthase) [47]. Oleaginous yeast species are now extensively studied as promising hosts for FAEE production since they naturally produce and accumulate large amounts of lipids, up to 36% of their dry weight. Recent studies increased the intracellular lipid content even further, up to 40%-70%, by engineering ex novo lipid biosynthesis, and it has been shown that lipids could be accumulated via the tightly regulated de novo synthesis pathway when acetyl-CoA carboxylase (ACC1) and diacylglycerol acyltransferase (DGA1) were overexpressed [48].
Other fatty acid-derived biofuel molecules such as fatty aldehydes, methyl ketones, alkanes and alkenes are now produced in microbial hosts (Figure 5), and the production of these fatty acid-derived fuels was reviewed recently [49,50]. The most significant progress has been made by identifying biochemical pathways and the responsible enzymes in various organisms for producing targeted fuel molecules, and by reconstructing the heterologous pathways in selected hosts. For example, heterologous expression of acyl-ACP reductase and aldehyde deformylase [51] (previously known as fatty aldehyde decarbonylase) enabled E. coli to produce alkanes and alkenes [52]. Furthermore, two terminal alkene synthesis enzymes were discovered for the production of α-olefins: an elongase-decarboxylase (an enzyme homologous to type I polyketide synthases) from Synechococcus sp. PCC7002 [53] and a fatty acid decarboxylase (a cytochrome P450, OleTJE) from a Jeotgalicoccus species [54], which act on acyl-ACP and fatty acids, respectively. In addition to understanding the biochemistry and structure of the identified enzymes, finding alternative enzymes to reduce fatty acid pathway intermediates to fuel molecules has been of great interest to the research community. For example, an NADPH-dependent fatty aldehyde reductase from Marinobacter aquaeolei VT8 exhibited reducing activity toward both acyl-CoAs and the subsequently produced aldehydes [55], which suggests that expression of this enzyme may reduce the accumulation of toxic aldehyde intermediates [49]. Lastly, the methyl ketone pathway has been engineered in an E. coli strain with the fadA gene deleted by overexpression of the β-oxidation enzyme FadB, an acyl-CoA oxidase from Micrococcus luteus, and a thioesterase (FadM) to produce β-keto fatty acids, followed by hydrolysis and decarboxylation to ketones [29,56].
Since fatty acids are the primary precursors, many engineering approaches have focused on increasing the available fatty acids [39]. These approaches include overexpression of thioesterases, blocking fatty acid degradation via β-oxidation, genomic modification to increase the metabolic flux to malonyl-CoA, and balancing FA pathway intermediates using sensor-regulators and FA synthase subunits. In addition, novel and synthetic pathways for fatty acid production have been proposed. Most significantly, fatty acids were overproduced by reversing β-oxidation while avoiding the ATP consumption and tight regulation associated with acetyl-CoA carboxylase, which produces the FA synthesis precursor, malonyl-CoA [57]. Using this synthetic pathway, fatty acids were produced at up to ~7 g/L (up to 80% of the theoretical maximum yield) [57]. In another study, the carbon chain lengths of fatty acid-derived fuels were tuned over the range from C6 to C18 in E. coli by introducing an alternative carboxylic acid reductase from Mycobacterium marinum [58]. Furthermore, it was reported that the composition of fatty acid-derived biofuels could mimic that of diesel or aviation fuel when the free fatty acid pools in E. coli were modified to contain iso-branched fatty acids and alkanes [59]. The proportion of iso-branched fatty acids could be further increased to 20% of the total fatty acids by expression of biosynthesis genes for threonine and isoleucine, although the total titer of fatty acids was decreased [60].
Isoprenoid-Based Biofuels
Isoprenoids are a diverse group of over 50,000 chemical compounds. Fuel molecules derived from isoprenoid pathways have branched hydrocarbon chains, which lower their freezing temperature, as well as various ring structures, which make them potential alternatives to diesel and jet fuels [61]. These branched hydrocarbon chains are derived from two universal precursors, isopentenyl diphosphate (IPP) and its isomer, dimethylallyl diphosphate (DMAPP) (Figure 6). The isoprene unit of IPP is first condensed with DMAPP and then iteratively condensed further to prenyl diphosphate molecules of various lengths, such as geranyl diphosphate (C10, GPP), farnesyl diphosphate (C15, FPP) and geranylgeranyl diphosphate (C20, GGPP) [62]. These condensation reactions are catalyzed by prenyltransferase enzymes, whose specificity determines the hydrocarbon length of the resulting isoprenoid [63]. Finally, prenyl diphosphates are diversified into various isoprenoid structures by terpene synthase enzymes, mostly via carbocation formation [64]. Regardless of their final structures, isoprenoids are primarily classified by the length of the hydrocarbon backbone, and the most abundant isoprenoids are the monoterpenes (C10), sesquiterpenes (C15) and diterpenes (C20). Therefore, engineering efforts to produce isoprenoid-derived biofuels have focused on increasing production of prenyl diphosphate intermediates, particularly IPP and DMAPP, and on identifying and engineering terpene synthases with the desired enzymatic activity.
Although two different biosynthetic pathways (the mevalonate (MVA) pathway and the methylerythritol phosphate (MEP) pathway) are known to produce the two universal C5 precursors, the MVA pathway has been more extensively exploited for production of potential diesel and jet fuel precursors such as farnesene [65], bisabolene [66,67], pinene [27,68] and limonene [26,69] (Figure 6). Since isoprenoid-derived fuels depend on the two precursors, IPP and DMAPP, most engineering efforts have focused on optimizing pathways by balancing fluxes of metabolic intermediates, increasing the transcription level of limiting enzymes [70], improving protein expression by codon optimization, and reducing the reversibility of the reactions [71].
Among sesquiterpene-derived fuels, biological production of farnesene and bisabolene has been reported. Farnesene is converted from FPP by farnesene synthase. Previously developed FPP-overproducing E. coli and yeast strains led to initial farnesene titers of 1.1 g/L after 120 h of E. coli fermentation and 728 mg/L after 72 h of yeast fermentation [72,73]. After continuous evolution of the host strains, Amyris, a biotech company, reported farnesene production at a titer of 104.3 g/L from an engineered yeast strain [74] and commercialized trans-β-farnesene under the name "Biofene®". Bisabolenes are another group of sesquiterpenes whose fuel properties qualify them as diesel alternatives [67]. Initially, titers of 400-800 mg/L of bisabolene were produced from E. coli and yeast platforms engineered for FPP overproduction [70,75]. An approximately 40% increase in titer was reported in E. coli by applying a principal component analysis of proteomics (PCAP) approach [26]. In yeast, three genes, two of unknown function and one encoding the transcriptional regulator Rox1, were identified using a carotenoid-based screening method, and further engineering efforts resulted in a bisabolene titer of 5.2 g/L under fed-batch fermentation conditions [66]. Microbial production of monoterpenes was significantly improved by increasing the availability of GPP and the protein expression level of terpene synthases. For example, production of limonene, a promising diesel fuel precursor [76], was significantly improved by introducing a heterologous MVA pathway and optimizing GPP synthase from Mentha spicata and limonene synthase from Abies grandis [69]. A step-by-step optimization led to an over 100-fold titer increase from the initially reported titer using the MEP pathway [77], up to 450 mg/L, and further engineering guided by proteomics analysis of the pathway enzymes improved the titer by about 40% [26]. For production of the jet-fuel precursor pinene, the initial titer in E. coli (5.44 mg/L in flasks [78]) was significantly improved by combinatorial fusion of GPP synthase and pinene synthase, with the highest titer of 32 mg/L obtained when GPP synthase and pinene synthase from A. grandis were co-expressed as a fused protein [27]. In addition to limonene and pinene, microbial production of sabinene was also reported at a titer of 82.18 mg/L [79].
Production of isoprenoid-derived alcohols was achieved by co-expression of phosphatases: isopentenol [23,80-82], geraniol [83] and farnesol [73]. In particular, isopentenol, a promising biofuel and a precursor for commodity chemicals such as isoprene, was produced at a titer of 2.2 g/L from 10 g/L glucose (70% of the apparent theoretical yield) [23]. This significant improvement in isopentenol production was achieved by "fine-tuning" the upstream MVA pathway [81] and by increasing the availability of NudB, which is required for hydrolysis of IPP to isopentenol [23].
Advanced Biofuels Production from LCB-Derived Sugars or Hydrolysates
Biofuel production from LCB has been limited primarily by the pretreatment and saccharification of LCB, which determine sugar yields, and by inefficient co-utilization of hexoses and pentoses by host strains. While ethanol production has more often been chosen as the representative pathway to demonstrate biofuel production from LCB-derived sugars, a few studies on advanced biofuel production from LCB-derived sugars have been pursued recently [84]. One of the early studies demonstrated the engineering of E. coli as a microbial factory for consolidated bioprocessing (CBP) by expressing enzymes required for both biomass degradation (cellulase, xylanase, β-glucosidase, and xylobiosidase) and biofuel synthesis (FAEE, butanol and pinene) [68]. Using this engineered E. coli strain, 71 mg/L of FAEE, 28 mg/L butanol, and 1.7 mg/L pinene were produced from 5.5%, 3.3% and 3.9% w/v IL-treated switchgrass, respectively. Although these titers need to be improved further for industrial application, it should be noted that 71 mg/L of FAEE was 80% of the estimated yield from 5.5% switchgrass, which released only 0.14% glucose and 0.14% xylose through the cellulase and xylanase expressed by the engineered E. coli strain [68]. A recent study demonstrated simultaneous isopentenol fermentation and saccharification of ionic liquid (IL)-pretreated pellets containing a mixture of four feedstocks [85]. The IL-pretreated pellet released 7 g/L glucose in 48 h, and almost 1 g/L of isopentenol was produced from this hydrolysate. In another study, E. coli was engineered to produce isobutanol from xylose by integrating the isobutanol synthetic pathway and xylose utilization genes into the genome and using xylose as an inducer for the expression of these genes [86]. In this study, a titer of 3.6 g/L isobutanol was produced from cedar hydrolysates containing 86.4 g/L glucose and 15.5 g/L xylose, although the productivity was 4.5 times lower than in media containing pure glucose and xylose [86]. In addition, higher alcohols have been produced in Corynebacterium crenatum via keto acid pathways using acid-pretreated hydrolysates of duckweed [87]. In this work, heterologous genes involved in the isoleucine, leucine and valine biosynthesis pathways from S. cerevisiae were expressed in C. crenatum, and 982 mg/L of 2-methyl-1-butanol, ~1.1 g/L isobutanol and ~685 mg/L of 3-methyl-1-butanol were produced from acid-pretreated hydrolysates of duckweed containing 60 g/L glucose without compromising productivity [87]. Although raw hydrolysates were not used as a carbon source, Avicel hydrolysates containing cellobionic acid, one of the major components of lignocellulosic biomass, were also used to produce 1.4 g/L of isobutanol, achieving 36% of the theoretical maximum with a productivity of 0.03 g/L/h, by expressing a native gene, ascB, encoding 6-phospho-β-glucosidase [88]. Even though most studies showed that productivity was reduced when biofuels were produced from LCB-derived hydrolysates or sugars, these results suggest that production of advanced biofuels from LCB-derived sugars is currently feasible and could be improved further by overcoming limitations that are not intrinsic to the engineered biofuel pathways and by further optimization of the responsible metabolic pathways.
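As an aside on the yield figures quoted throughout this section, percent-of-theoretical-maximum values follow from simple stoichiometry. The sketch below is a back-of-the-envelope illustration, not taken from the cited studies: it assumes the keto-acid pathway stoichiometry of one isobutanol per glucose (the mass yield works out the same for xylose on a per-carbon basis) and complete sugar consumption, and it reuses the cedar-hydrolysate numbers above purely as example inputs.

```python
# Back-of-the-envelope illustration (not from the cited studies) of how
# "percent of theoretical maximum" yields are computed, assuming the
# keto-acid pathway stoichiometry: 1 glucose -> 1 isobutanol + 2 CO2.

MW_GLUCOSE = 180.16      # g/mol, C6H12O6
MW_ISOBUTANOL = 74.12    # g/mol, C4H10O

# Theoretical maximum mass yield: 1 mol isobutanol per mol glucose (~0.41 g/g);
# xylose gives the same mass yield on a per-carbon basis.
theoretical_yield = MW_ISOBUTANOL / MW_GLUCOSE

# Example inputs reused from the cedar-hydrolysate study above, assuming
# complete consumption of both sugars (a simplifying assumption).
titer = 3.6                      # g/L isobutanol
sugars = 86.4 + 15.5             # g/L glucose + xylose
actual_yield = titer / sugars    # g product per g sugar consumed

percent_of_theoretical = 100 * actual_yield / theoretical_yield
print(f"theoretical maximum: {theoretical_yield:.2f} g/g")
print(f"percent of theoretical: {percent_of_theoretical:.0f}%")
```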
Conclusions
Ethanol has been produced from various carbon sources (from corn- and sugarcane-based glucose to lignocellulosic biomass) by engineering or by exploiting the native fermentation pathways of various microbial hosts. Production of higher alcohols (1-butanol, isobutanol, isopentenol, etc.) as alternatives to ethanol with better fuel properties has been demonstrated by engineering fermentative pathways, non-fermentative keto-acid pathways, and isoprenoid pathways. In addition to higher alcohols, fatty-acid-derived and isoprenoid-derived biofuels have also been proposed as good diesel alternatives. Various microbial hosts and metabolic pathways have been explored extensively to improve yield, titer, and productivity using various strategies. It would be hard to establish a common strategy that works for all kinds of biofuels derived from various metabolic pathways. However, more systematic and more collective efforts will be required in the future to overcome several bottlenecks mentioned in this review, such as extended sugar utilization capability, robustness of microbial hosts against general stresses and toxic products, and the scale-up and actual commercialization of advanced biofuels. These bottlenecks are related to the general physiology of microbial hosts rather than to any specific metabolic pathways. Metabolic pathway engineering, in addition to improving the general physiology of candidate biofuel producers, would allow more economically viable biofuel production, which will reduce the heavy dependence on petroleum-based fuel and contribute to slowing down global warming by providing carbon-neutral energy for the transportation sector. | 2016-03-14T22:51:50.573Z | 2015-10-27T00:00:00.000 | {
"year": 2015,
"sha1": "aa924b14e217811a6703c8d78c9e1199c7674ab3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2306-5354/2/4/184/pdf?version=1445954876",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5deeb9cc8c1634b44f1185f3f9eeb69e1da17d69",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
271479201 | pes2o/s2orc | v3-fos-license | Unconventional banking and poverty reduction: A regression analysis with policy recommendations for Hail
The present study addresses a significant omission in the literature by examining the role of non-traditional financial organizations in poverty reduction. Previous research has primarily focused on the direct impact of commercial banks on poverty, neglecting the contribution of other types of financial institutions. Islamic banks are anticipated to play a pivotal role in mitigating poverty, yet few studies have examined this question. This research used the Partial Least Squares Regression technique to investigate the impact of Islamic banks on poverty alleviation in several developing nations from 2013 to 2019. The empirical results demonstrate a direct relationship between the total assets of Islamic banks and poverty reduction: greater asset holdings by Islamic banks are associated with lower poverty. Furthermore, a statistically significant correlation exists between the legal framework and poverty alleviation. It is important to acknowledge that the research is constrained by the limited availability of data on Islamic banks and by the specific time frame of the study. Future academic research should increase the sample size, incorporate cultural and social factors, and expand the time frame to understand fully the intricate function of Islamic banks in reducing poverty. This paper is useful to policymakers in developing countries, including those shaping development plans for Hail in Saudi Arabia.
Direct contribution of banks to poverty alleviation
Banks' contribution to poverty reduction has garnered significant attention from scholars [1-4]. Banks provide financial services that empower individuals across various income categories to overcome economic challenges. Firstly, banks offer savings avenues to the impoverished, and accumulated savings may enable them to establish their own businesses, thus improving their living standards. Moreover, the loans offered by banks are essential in elevating individuals' income levels, as they address their financial difficulties. Hence, banks have a direct impact and endeavor to mitigate poverty [5]. However, the quantification of banks' roles in alleviating poverty is debatable. In the literature, banks are typically treated as part of financial and economic development when their role in poverty alleviation is explored [5,6]. Capitalization, microfinance institutions, non-banking institutions, banking institutions, and private credit serve as proxies for financial development, while economic growth is assessed via Gross Domestic Product, the GDP growth rate, or inflation. Banks, in general, play a vital role in boosting economic development; they also provide employment opportunities, loans to start business ventures, tax revenue, and charitable contributions to the disadvantaged. The development of the financial sector helps to alleviate poverty [7]. Financial systems generally serve as a mechanism for mobilizing savings, allocating capital funds, monitoring the usage of money, and controlling risk in support of the economic development process [8].
More recently, scholars have turned their attention to non-traditional banking [9-11]. The recorded evidence highlights the beneficial impact of microfinance organizations in alleviating poverty [12,13]. Nevertheless, there is a lack of quantitative research examining the impact of Islamic banking on poverty reduction, since most studies have focused on the setting of conventional banking [14-16]. These studies attempted to examine the roles of banks in poverty alleviation empirically, but they focused only on conventional banks, ignoring the significant role that Islamic banks play in poverty alleviation, especially in Islamic states where many people prefer not to deal with traditional banks [11,17].
The key differentiating factor between Islamic and conventional banks is the rejection of interest-based operations in commerce, which are seen as harmful to society. Due to their negative impact on individuals and institutions, Islamic Sharia forbids practices such as gambling, smoking, and drinking [11]. Islamic banking adheres to specific principles and regulations. It encompasses different forms such as Mudarabah, which involves sharing profits and losses in business ventures; Wadiah, for safekeeping; Ijarah, for leasing; Musharakah, for joint ventures; Murabahah, for cost-plus financing; Hawala, for transfers; Takaful, for Islamic insurance; and Sukuk, for Islamic bonds. In these activities, the bank may serve as a loan provider or as a business partner; see Fig. 1.
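To make the contrast with interest-based lending concrete, the toy sketch below compares a Murabahah cost-plus sale, in which the client's total obligation is fixed up front, with a Mudarabah partnership, in which profits are split at a pre-agreed ratio and monetary losses fall on the capital provider. All figures and function names are hypothetical illustrations, not drawn from the cited sources.

```python
# Toy numeric sketch (all figures hypothetical) of two of the contract
# forms named above, contrasted with the fixed repayment of an interest loan.

def murabahah_total(cost: float, markup: float) -> float:
    """Cost-plus sale: the bank buys the asset and resells it at an agreed
    markup; the client owes the same fixed total regardless of outcome."""
    return cost * (1 + markup)

def mudarabah_split(profit: float, bank_share: float) -> tuple[float, float]:
    """Profit-and-loss sharing: profits are divided at a pre-agreed ratio,
    while monetary losses are borne by the capital provider (the bank)."""
    if profit >= 0:
        return bank_share * profit, (1 - bank_share) * profit
    return profit, 0.0  # the bank absorbs the monetary loss

print(murabahah_total(100_000, 0.08))   # client repays 108,000 in total
print(mudarabah_split(20_000, 0.4))     # bank gets 8,000; entrepreneur 12,000
print(mudarabah_split(-5_000, 0.4))     # bank bears the 5,000 loss
```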
The 1998 crisis confirmed that Islamic banks have a higher intermediation ratio, better asset quality, and are better capitalized than conventional banks [18]. The research examines the financial development of Islamic banks in terms of their total assets, economic growth in terms of gross domestic product (GDP), and the poverty rate in each nation. The research also considers each country's legal background. Since only a small amount of empirical research has been undertaken on the roles of Islamic banks in economic development and poverty reduction, it appears worthwhile to explore the influence of Islamic banks on poverty reduction in a selection of countries.
It is worth noting that exploring the unconventional banking function is essential, since prior studies have yet to address this area adequately [19-21]. Additionally, the current research contributes originality by considering the effect of legal origin, namely the rule of law [22-24]. Indeed, a few research investigations have shown the significant role of good governance in reducing poverty in developing countries [22,24]. These studies found a beneficial influence on poverty alleviation, particularly regarding factors such as the frequency of meetings, audits, supervision, domestic legislation, and adherence to a code of ethics [25]. Hence, legal origin plays a significant role, especially in the context of non-traditional banks in developing countries.
Importance, objectives, and the problem statement
This study is motivated by several considerations. Firstly, it proposes a new dimension for exploring the contribution of non-traditional banks by using Islamic banks' total assets as a unique construct. Secondly, the study chose the context of developing countries, as poverty reduction is a priority for many governments, particularly in the Middle East and Africa, where poverty levels are high.
The objective of the current study is to investigate the relationship between the total assets of Islamic banks and poverty reduction to determine the extent to which these banks' financial performance contributes to alleviating poverty. The research questions are as follows.
I. How does economic growth independently contribute to poverty reduction, and how does it interact with the total assets of Islamic banks in influencing poverty levels?
II. To what extent do legal origins influence the effectiveness of Islamic banks in reducing poverty?
III. What are the critical policy implications derived from the findings, and how can policymakers leverage the insights to enhance the impact of Islamic banks on poverty reduction, considering the interplay of economic growth and legal origin?
This research is necessary for several methodological and contextual reasons. First, using the total assets of Islamic banks as a distinct construct adds a fresh perspective to the investigation of the role played by non-traditional banks. This strategy is novel because it distinguishes Islamic banks from other kinds of banks, such as commercial banks, and offers a more accurate way to gauge their performance. Second, the emphasis on emerging nations is essential because many governments continue to prioritize reducing poverty, especially in the Middle East and Africa, where high poverty rates are common. Policymakers may benefit greatly from understanding the roles of Islamic banks in these regions. Third, given their distinct legal roots, there is a sizable gap in quantitative research on the contribution that Islamic banks make to the fight against poverty. This research closes that gap by presenting empirical evidence on the relationship between poverty alleviation and Islamic banks' financial performance. Finally, the research seeks to identify important policy implications that governments and financial regulators may use to capitalize better on the distinctive qualities of Islamic banks and lessen poverty. Through an analysis of the relationship between Islamic bank performance, legal origin and economic growth, this paper offers practical recommendations for enhancing financial inclusion and economic development in emerging nations.
This paper is distinctive in the way it measures the impact of Islamic banks on reducing poverty. Through an analysis of the total assets of Islamic banks and their relationship to legal roots and economic development, the study offers fresh perspectives not thoroughly covered in other studies. The study provides a nuanced understanding of how different regulatory environments affect the performance of Islamic banks in reducing poverty. It also offers policymakers practical recommendations for designing more effective strategies for leveraging Islamic banks to combat poverty. These contributions add a new dimension to the field of Islamic banking research by using the total assets of Islamic banks as a key variable in assessing their impact on poverty reduction. Furthermore, the research closes a gap in the literature by providing empirical evidence on the connection between Islamic banks' financial performance and poverty reduction in developing nations, advancing the conversation on financial inclusion and economic growth.
Poverty reduction in developing countries
Developing countries are racing toward poverty alleviation; China has shown strong performance, and many developing countries have shown improvement in poverty numbers [26]. According to current studies [27,28], China achieved the most significant poverty reduction internationally by lifting 738 million people out of poverty from 1990 to 2017. Vietnam decreased its poverty rate from 53.3 % in 1993 to 3.2 % in 2018 by focusing primarily on enhancing agricultural output, implementing market reforms, and rolling out targeted social programs. Indonesia significantly reduced poverty, dropping from 61.7 % in 1991 to 9.2 % in 2018. India successfully alleviated poverty for 271 million individuals from 2005/6 to 2019/21, primarily due to economic expansion, job creation measures, and rural development projects. Bangladesh decreased its poverty rate from 53.2 % in 1993 to 21.0 % in 2016 via economic development, investments in education and healthcare, and the implementation of microfinance programs. Ethiopia made significant strides in decreasing its poverty rate, which dropped from 44.2 % in 1996 to 23.2 % in 2019, attributable mainly to advancements in agriculture, investments in healthcare and education, and the implementation of social security programs. Ghana reduced its poverty rate from 56.7 % in 1991/92 to 23.4 % in 2016/17, driven by economic development, policies prioritizing the well-being of the poor, and investment in human resources. Rwanda made remarkable strides in decreasing its poverty rate, which dropped from 59.7 % in 2005/06 to 35.5 % in 2016/17, attributable to the country's focus on rural development, investments in agriculture, and the implementation of social protection programs; see Table 1. However, this record of accomplishment has not been consistent. Several nations have seen only minor alleviation of poverty or even a rise in poverty levels; see Table 1. The underwhelming results might be partially attributed to the lackluster economic development seen in most African nations throughout the 1980s and early 2020s. Significant economic disparity, consistently seen in several Latin American nations over time, may be an essential factor. Therefore, substantial questions have been raised about the driving force behind poverty reduction in developing countries. Income growth has been the main driving force behind both decreases and increases in poverty. The analysis, however, reveals significant variations across regions and countries concealed by the overarching narrative of development. Although growth was the primary driver of poverty reduction or increase in most countries [26], the poverty rate itself significantly influenced poverty dynamics in many nations.
In conclusion, while the evidence indicates substantial advancement in alleviating poverty in developing countries, much work still needs to be done. To achieve the Sustainable Development Goal of eradicating poverty by 2030, it is crucial to adopt a comprehensive strategy that tackles the underlying causes of poverty, focuses on inclusive and sustainable economic growth, empowers local communities through targeted interventions, and utilizes innovative financing methods. Only then can policymakers realize a society in which collective wealth benefits every individual.
Underlying theories
To comprehend the factors contributing to poverty reduction, it is essential to ground our study in a sound theoretical foundation, built upon frameworks that outline the role of Islamic banks in poverty reduction. Islamic banks start the process by allocating a portion of their overall assets as legally mandated contributions. This sum of money is then allocated to charitable causes and used to fight poverty [11,29]. Taking a bank's total assets into account therefore reflects, in some measure, the contribution of a given Islamic bank to the poverty reduction effort (Mohseni-Cheraghlou, 2017). Society, represented by bank customers, is regarded as one of the stakeholders of Islamic banks; see Fig. 2. As businesses, Islamic banks fit well with stakeholder theory [30]. The importance of Corporate Social Responsibility (CSR) for Islamic banks explains the selection of Islamic banks' total assets as an indicator of their contribution to poverty reduction.
Islamic banks are required by law to give a certain amount of their total assets to charitable projects that strive to eliminate poverty. This approach is rooted in the principles of Islamic finance, which prioritize social justice and economic equality. By considering banks' overall assets, we may assess the level of impact that particular Islamic institutions have on reducing poverty. This is consistent with stakeholder theory, which holds that stakeholders contribute to a firm's performance and which advocates that enterprises prioritize the interests of all their stakeholders, including the wider community [31]. CSR endeavors are central to Islamic banks' operating philosophy: Islamic banks direct money from their holdings to diverse humanitarian activities, including those specifically targeted at mitigating poverty. By drawing these contributions from their overall assets, Islamic banks demonstrate their dedication to the betterment of society, satisfying their obligations to stakeholders as prescribed by Stakeholder Theory. This theoretical framework offers a solid basis for our research, connecting the overall resources of Islamic banks to initiatives aimed at reducing poverty. Incorporating total assets as a metric in our model is warranted by the banks' involvement in CSR and their responsibility to contribute to societal initiatives. This relationship ensures that our study is grounded in a thorough understanding of the banks' dual function as financial entities and catalysts for societal transformation. This study focuses on the relationship between the total assets of Islamic banks and their impact on poverty reduction, and we aim to demonstrate the practical implications of Stakeholder Theory within the framework of Islamic finance and social responsibility. Furthermore, legitimacy theory considers the position of corporations from a general societal perspective; it posits an underlying compact between the company and society, under which satisfying society's expectations validates the company's existence [32]. As a result, Islamic banks facing higher expectations of societal development and well-being must provide more information on their CSR. Islamic banks also assert that they are founded on Islamic principles and values. This is an ideal assumption; the immanent critique contends that ideal assumptions differ from reality. Nevertheless, Islamic banks as corporations contribute to poverty alleviation through entrepreneurship construction, corporate social responsibility, and charity integration [32,33]. Regarding the importance of examining the legal origin's influence on the relationship between bank performance and poverty reduction, we base our analysis on Legal Origin theory [34]. This theory asserts that a nation's legal traditions and institutions substantially affect several facets of its economic and financial progress [30,34,35]. Legal scholars frequently categorize legal traditions based on their historical origins, with two notable classifications being the Common Law tradition and the Civil Law tradition [34,36].
Financial development, microfinance, Islamic banks, and poverty
Financial development is believed to play a vital role in poverty reduction through the financial services offered in the banking sector [37], which foster growth that results in poverty reduction (Beck et al., 2008). When a wide range of financial products and services is offered to excluded people through conventional banking, their income will grow and poverty will be reduced [7].
Research in this area has found a positive relationship between financial development and economic growth [38], between the average income of the population and the income of the poor [39-42], and between financial development and the poor's access to financial services, represented by their ability to access loans easily [43,44]. According to Zhuang [45], the more the poor access financial services, the better their chances are. However, not all studies support the idea of a positive relationship between financial development, economic growth, and poverty reduction, as shown by work linking stock market and bank development to the income level of the poor [46]. Other studies report a significant effect of financial development in reducing the poverty rate in developing countries [7,37]. Financial development is measured in different ways, such as credit, gross domestic product, and bank assets; poverty can likewise be measured in various ways, such as the poverty rate, the poverty gap, and microfinance outreach; and the methods of analysis include ordinary least squares, two-stage least squares, and GMM [47-49].
Furthermore, previous research focused on the impact of microfinance on poverty using Islamic banks as financial development indicators [5,50]. Indeed, compared to Grameen Bank microcredit respondents, Islamic bank microcredit respondents have a more extensive history of using credit for income-generating activities to decrease poverty [51]. Islamic charity effectively alleviates short- and long-term poverty, especially when Islamic social and commercial financing are integrated into a unified paradigm [52]. Using the autoregressive distributed lag (ARDL) method, an approach for analysing the relationship between bank performance and poverty reduction, Islamic finance has been shown to mitigate short- and long-term poverty in the Indonesian setting [10].
Islamic microfinance institutions act as a solution for meeting the poor's needs by providing them with the required financial products and services that increase their income without harming them or increasing their debts. Islamic financial institutions play a significant role in channeling funds from surplus units to deficit units in the economic system, eventually leading to more economic activity and reduced poverty. The basic principle of Islamic finance does not rely on the interest rate as in conventional banking; it is based on profit and loss sharing. It also does not allow unfair practices or a predetermined rate of return on investment. This reduces the financial burden on individuals, particularly poor ones, and allows for sharing losses if incurred or profits if received. Islamic banks are growing rapidly; they have recorded high growth in size and number and operate in about 60 countries worldwide. Islamic banking is predicted to control over 50 % of the savings in Islamic countries within the next decade [53]. Many economic ills, such as poverty, inequalities of income and wealth, economic instability, social and economic injustice, and inflation of monetary assets, conflict with the value system of Islam [54]. Islamic finance is directed toward full employment, economic well-being, equal distribution of income, and growth of the economy, as this leads to socio-economic justice, more savings, productive mobilization, and stability in the value of money (Chapra, 2000, cited in Ref. [55]).
In short, Islamic financial institutions do not operate on the principle of profit maximization; their system is integrated with moral and ethical values that aim to bring about social change. They also do not depend on the supply of tangible collateral, leading to better income and wealth distribution. Integrating moral values into their system allows poor people access to income and greatly benefits social justice and long-term growth [56].
However, total assets might more accurately reflect the role of Islamic banks. To conclude, we have chosen Islamic banks' total assets to indicate their contribution towards poverty alleviation. Based on the above discussion, it seems worthwhile to investigate the role of Islamic banks in reducing poverty and improving the standard of living. Accordingly, the following hypothesis is developed.
H01. The total assets of Islamic banks positively impact poverty mitigation in developing countries.
Economic factors and poverty
Poverty has always been categorized as a multifaceted phenomenon challenging the growth and development of economies worldwide, particularly in developing countries. It is not confined to income shortage alone; it may also include poor education, malnutrition, and inadequate housing and shelter. Poverty is a complex issue requiring a joint effort of various sectors in each country to mitigate it. In cooperation with international organizations, local governments have attempted to minimize the poverty rate among their citizens by implementing different policies and economic reforms to meet these goals.
However, despite these efforts, it is still reported that about 900 million people globally are trapped in the poverty cycle in extreme poverty. One strategy for poverty reduction is a concentration on economic growth and development, which is believed to function as a driver and a good tool for mitigating poverty. Economic growth might help in poverty reduction, but it is not, by itself, sufficient to cut the poverty rate to below 3 % by 2030 unless accompanied by policies that guarantee the maximum benefits of job creation and growth [57]. It is of utmost importance to policymakers since it deals with population welfare [58]. Furthermore, poverty may result in more complex issues such as misallocation of resources, malnutrition, and an increase in fertility. Previous investigations reported that economic growth plays a crucial role in diminishing the incidence of poverty [26]. Continuous economic reform is also a necessary step in the process of poverty reduction, as with the 1990 Indian economic reforms [59]. Economic growth is said to achieve its goals when it allows marginalized people to access distinct financial services and use them to establish their own income-generating activities, ultimately reducing poverty. This task is conducted by microfinance institutions, including Islamic ones. The required credit for poor individuals should be provided easily and at low interest rates to allow them to benefit from small loans. Moreover, many previous studies revealed that economic growth and development can reduce poverty (R. et al., 2004; R. et al., 2003; [60]).
Fixed costs and indivisibilities also prevent poor people from obtaining the cash they need, limiting their growth. Poverty dampens economic growth through market imperfections and failures, fixed costs, and strategic complementarities [61]. Thus, once the economy develops and market imperfections lessen, access to capital in various forms may be expected to widen, and inequalities will begin to lessen with continued economic growth [62].
A state is said to have efficient economic growth when its growth is accompanied by a change in the people's income distribution. Wealth distribution is an integral part of the relationship between economic growth and poverty reduction; thus, it is essential to consider it when measuring the effectiveness of economic growth. Economic growth needs to focus on areas where most people live, such as rural areas, meaning that any development should incorporate basic education and the necessary infrastructure for bettering the rural poor. Mere concentration on per capita income will not be a successful strategy for poverty reduction. For a given rate of growth, the extent of poverty reduction depends on how the income distribution changes with growth and on initial inequalities in income, assets, and access to opportunities that allow the poor to share in the growth [63]. A high percentage of poor individuals is itself an indicator of poor economic growth [64]. Inequalities in wealth imply inequalities in access to productive assets, resulting in underutilization of the productive potential of the poor (Ferreira, 1999).
From the above discussion, economic growth is a necessary tool for poverty reduction regardless of the measurement used, as it allows the sharing of income and wealth and gives poor individuals a chance to start their own income-generating activities. Most importantly, economic growth should ensure a positive change in income and living standards before it can be claimed to be making a positive difference. Accordingly, it seems motivating to empirically investigate the relationship between economic growth and poverty in a selected sample of developing countries. For that, the following hypothesis is developed.
H02. The economic growth represented by GDP positively impacts poverty mitigation in developing countries.
Legal origin, banks, and poverty
Studies in the last decade have shown the relationship between legal systems and the financial development of a state or of companies operating in its territory [65,66]. Indeed, La Porta et al. (1999) argue that civil law states play a more critical role in regulating business than states with common law. Applying empirical models to Islamic banks at different levels requires considering legal history; the literature shows that countries whose legal systems are based on English common law differ from countries whose legal systems are based on French law [67]. The history of English and French law is a fascinating historical story with significant implications for institutional economics. Legal origin is used in the literature as a proxy for exogenous historical factors (et al., 2021). Nonetheless, in many nations, legal origin is considered a hindrance rather than an opportunity for Islamic banking [68]. In earlier work, one or more of the primary authors of this research found that legal origin may influence the performance of Islamic banks in emerging nations [29,30]. As a result, we propose the following hypothesis.
H03. The legal origin positively impacts poverty mitigation in developing countries.
The nexus between Islamic banks and poverty reduction
The Islamic banking industry contributes to economic expansion by supplying employment and investing capital. However, quantifying the impact of different types of banking on poverty is still in its infancy. In addition, the financial accounts of these banking institutions always contain a gift to the impoverished in the form of a donation known as Zakat [69,70]. This sum is claimed to contribute to reducing poverty. Nonetheless, there is abundant opportunity to examine this assumption empirically by measuring financial progress using Islamic institutions as proxies. In high-poverty clusters, overall assets, deposit funds and Islamic bank financing show a substantial negative link with poverty [71,72].
In addition, increased financing from Islamic banks reduces poverty. However, growth in total assets and in the number of branch networks has also been found to coincide with rising poverty levels. One possible explanation is that a large portion of community savings backs Islamic banks' assets and that Islamic banks' distribution of financing to the community is still being determined [73].
Islamic banking research has also considered the achievement of the sustainable development goals (SDGs), particularly poverty reduction and economic growth; the findings indicate that Islamic bank-specific and macroeconomic variables have a substantial influence on poverty reduction and economic development, meaning that the Islamic Banking Industry (IBI) has the potential to achieve the SDGs, which supports its promotion (Siddique et al., 2020). Furthermore, Wajdi Dusuki (2008) proposes that one potential vehicle for Islamic banks to channel money to the needy is a Special Purpose Vehicle (SPV).
In conclusion, investigating the contribution of the Islamic banking sector to poverty reduction fills a gap in the literature; this is what we attempt to achieve in this research by compiling a list of nations with Islamic finance businesses. Fig. 3 depicts the conceptual framework, in which total assets quantify the contribution of Islamic banks relative to Gross Domestic Product.
Conceptual framework
The model below (see Fig. 3) presents poverty as the dependent variable and Islamic banks as the independent variable. The independent variable is captured by three main measures: total assets, GDP, and legal origin.
Source of data
Islamic banks adopt a system of banking based on profit and loss sharing rather than interest-earning; their guiding principles are Islamic laws. We measure the contribution of Islamic banks towards economic growth by the ratio of Islamic banks' total assets to economic growth in the specific country. The data on Islamic banks' total assets are not entirely complete; the source of the data is the World Database for Islamic Banking and Finance (WDIBF). The data concerning the economic growth of each country, as well as the data on the poverty rate, are derived from the World Bank database. Finally, the legal environment data are sourced from published articles [30]. The data cover 12 developing countries from the fourth quarter of 2013 to the last quarter of 2019. Missing data, particularly in the poverty proxy, were a challenge for the researchers; for some quarters complete data were available, while for others values were approximated mathematically for some countries. The data were analyzed using Stata; pooled linear regression was applied, along with panel fixed-effects and random-effects models. The appropriate results were selected based on the Hausman test.
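As one plausible reading of the approximation step described above, the short sketch below fills missing quarterly poverty observations by linear interpolation within each country. The file and column names are hypothetical, and this is offered as an illustration rather than the authors' exact procedure.

```python
# A minimal sketch of one way to fill missing quarterly observations:
# linear interpolation within each country. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("islamic_banks_panel.csv")            # hypothetical file
df["quarter"] = pd.PeriodIndex(df["quarter"], freq="Q")
df = df.sort_values(["country", "quarter"])

# Interpolate the poverty rate within each country; edge gaps are filled
# from the nearest observed quarter.
df["pov_rate"] = df.groupby("country")["pov_rate"].transform(
    lambda s: s.interpolate(limit_direction="both")
)
```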
Table 2 describes the study's variables: the poverty rate, which measures the percentage of the population in poverty; total assets, which capture the total assets of Islamic banks; and GDP, which measures economic growth. Finally, legal origin measures the legal background. Table 3 lists the countries selected for the study, depending on the availability of data about Islamic banks and poverty.
Data analysis and interpretation
For more accurate results, the study used the same model applied by (R. et al., 2005), which was adapted from Ravallion (1996); it is known as the growth-poverty model. It was originally used to investigate the impact of international migration and remittances on poverty in developing countries. Thus, the same steps are followed: financial development is proxied by the ratio of Islamic banks' total assets to Gross Domestic Product, poverty is measured by the poverty rate, and economic growth by GDP. The legal environment of the different countries is captured by a dummy variable indicating the legal system of each country, coded from 0 to 6. See Eq. (1):

PovertyReduction_it = β0 + β1 (TotalAssets/GDP)_it + β2 GDPGrowth_it + β3 LegalOrigin_i + ε_it (1)
• Poverty Reduction: The dependent variable is the change in the poverty rate from one period to the next.
• Total Assets of Islamic Banks: An independent variable, measured as the total assets of Islamic banks as a percentage of GDP.
• Economic Growth: An independent variable measured as the real GDP growth rate.
• Legal Origin: An independent variable, measured as a dummy variable that takes the value 1 if the legal system is based on French civil law and 0 otherwise.
• β0, β1, β2, β3: The parameters to be estimated.
• β1: The coefficient on the total assets of Islamic banks, representing the change in poverty reduction associated with a one percentage point increase in the ratio of total assets of Islamic banks to GDP.
• β2: The coefficient on economic growth, representing the change in poverty reduction associated with a one percentage point increase in the real GDP growth rate.
• β3: The coefficient on legal origin, representing the difference in poverty reduction between countries with French civil law legal systems and those with other legal systems.

In its estimated form, Eq. (2) reads:

pov_it = β0 + β1 (TotalAssets/GDP)_it + β2 GDP_it + β3 LegalOrigin_i + ε_it (2)

where pov is the measure of poverty in country i at time t; GDP is the Gross Domestic Product of the country; total assets is the total assets of the Islamic banks divided by Gross Domestic Product; the poverty rate measures the share of people below the poverty line; and legal origin represents the legal background of the country, consistent with the poverty-growth model explained by Ravallion [74]. More Islamic bank assets generally represent growth in financial development, which raises GDP growth and reduces poverty; Islamic banks are thus expected to improve economic growth and thereby reduce poverty. The model analysis involved pooled regression, with the data declared as panel data. Finally, fixed-effects and random-effects ordinary least squares were used.
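Although the paper's estimation was run in Stata, the following minimal Python sketch illustrates the same workflow: fixed-effects and random-effects estimation of Eq. (2), followed by a Hausman test in its textbook form. The file and column names are hypothetical and carried over from the interpolation sketch above; this is an illustration of the procedure, not a reproduction of the authors' code.

```python
# A minimal sketch of the panel estimation and Hausman test described above.
# Requires pandas, numpy, scipy, and linearmodels; column names hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("islamic_banks_panel.csv")            # hypothetical file
df["quarter"] = pd.PeriodIndex(df["quarter"], freq="Q")
df = df.set_index(["country", "quarter"])              # entity, time index

y = df["pov_rate"]                                     # % below poverty line
X = df[["ta_gdp", "gdp_growth", "legal_origin"]].assign(const=1.0)

# Fixed effects: legal_origin is time-invariant, so entity effects absorb it
fe = PanelOLS(y, X[["ta_gdp", "gdp_growth"]], entity_effects=True).fit()
re = RandomEffects(y, X).fit()

# Hausman test on the coefficients common to both estimators:
# H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE) ~ chi2(k)
common = ["ta_gdp", "gdp_growth"]
b = (fe.params[common] - re.params[common]).to_numpy()
V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
h_stat = float(b @ np.linalg.inv(V) @ b)
p_value = stats.chi2.sf(h_stat, df=len(common))
print(f"Hausman chi2 = {h_stat:.3f}, p = {p_value:.3f}")
# A non-significant p-value (the paper reports p = 0.558) favors random effects.
```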
Table 4 shows the descriptive statistics of the study variables, i.e., the poverty rate for the poverty variable, the total assets for the financial institutions, the GDP for the economic growth, and finally, the legal origin.
Table 2
Variables description and source.

Variable        Description                                                      Source
Poverty rate    Percentage of people living below the poverty line out of the   TheGlobalEconomy.com
                total population.
Total assets    Bank total assets, taken from balance sheets.                    World Database of Islamic Banking and Finance
GDP             Gross Domestic Product in US dollars.                            World Bank database
Legal origin    Dummy variable for the legal background of the country.          [30]

Source: secondary data.
Table 5 above reports the correlations among the study variables. The correlation between GDP and the poverty rate is 0.62, while the correlation between legal origin and the poverty rate is 0.33. The correlation between total assets and the poverty rate is −0.138. The correlation between total assets and GDP is weak, at 0.0245. Finally, total assets and legal origin have a correlation of 0.36, a moderate connection. The highest correlation among the study variables is between GDP and the poverty rate, and the total assets of the Islamic banks correlate negatively with the poverty rate.
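For completeness, the descriptive statistics and correlation matrix of Tables 4 and 5 can be reproduced in a few lines from the same hypothetical data file used in the sketches above:

```python
# Descriptive statistics (Table 4) and pairwise correlations (Table 5);
# column names are the hypothetical ones used in the sketches above.
import pandas as pd

df = pd.read_csv("islamic_banks_panel.csv")
cols = ["pov_rate", "ta_gdp", "gdp_growth", "legal_origin"]
print(df[cols].describe())   # mean, std, min, max per variable
print(df[cols].corr())       # Pearson correlations; the paper reports,
                             # e.g., corr(GDP, poverty rate) = 0.62
```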
Results
The analysis started with pooled linear regression; however, the pooled model was not deemed suitable. We then declared the data as a quarterly panel data set and estimated fixed-effects and random-effects models. It was found that there is a significant negative relationship between the total assets of the Islamic banks and the poverty rate at the 5 % level; the larger the total assets of the Islamic banks, the lower the poverty. At the same time, GDP is significantly and positively related to the poverty rate, indicating that economic growth corresponds to a higher poverty rate. The random-effects analysis gave the same result, and no significant difference exists between the random-effects and fixed-effects estimates. We applied the Hausman test to determine which result was appropriate; the Hausman result indicated that the random-effects estimates were the appropriate ones, see Table 6.
To sum up, the results indicate that the total assets of Islamic banks, as a measure of financial development, have a significant negative relationship with poverty at the 5 % level: additional ownership of assets by Islamic banks corresponds to a reduction in poverty. These results support those of Donou-Adonsou and Sylwester [75] and Honohan [76], who also found that financial development lowers poverty. However, the surprising result is that economic growth is positively related to poverty; this result supports those of DFID [77] and Seven and Coskun [78]. One possible reason is the omission of additional explanatory variables. Islamic banks showed a negative impact on poverty because they serve small and medium enterprises, which middle- and low-income people usually own. Our results add to the broader understanding of how Islamic banking might affect poverty reduction from a theoretical standpoint. Islamic banks must put aside a part of their assets for charitable purposes and prioritize providing financial support to small and medium enterprises (SMEs) owned by persons with middle and low incomes. This helps to redistribute wealth and encourages financial inclusion, in line with Stakeholder Theory, which underscores the importance of enterprises generating value for all stakeholders, including the wider community.
Robustness
We thoroughly evaluated the strength and reliability of our results by following a rigorous set of scientific procedures and including additional factors. At first, we used a pooled linear regression model to examine the data. Nevertheless, this method was considered unsuitable for our dataset. As a result, we classified the data as a quarterly panel data set and used fixed and random effects models to address any possible differences across observations.
To strengthen the reliability of our study, we added supplementary variables such as the inflation rate. Including inflation accounts for macroeconomic factors that may independently affect poverty levels, apart from the variables of primary interest. This helps ensure that our findings are not driven by omitted variables and that the associations we identify are not distorted by inflationary forces. The Hausman test was used to choose between the fixed and random effects models. The test results showed that the random effects model was more appropriate for our investigation, as indicated by a non-significant Chi-square value (p-value = 0.558). This suggests that the entity-specific effects are not correlated with the independent variables, which makes the random effects model the more suitable choice.
The results of our study indicate a strong inverse correlation between the total assets of Islamic banks and the poverty rate, statistically significant at the 5 % level. More precisely, as the total assets of Islamic banks grow, poverty decreases, suggesting that the development of Islamic banking plays a role in reducing poverty. These findings align with earlier research by Donou-Adonsou and Sylwester [5] and Honohan [76], which also concluded that financial development reduces poverty. Furthermore, our examination uncovered a substantial positive correlation between GDP and the poverty rate, indicating that an increase in economic development is associated with an elevation in poverty levels. This unexpected conclusion is consistent with research by DFID [47] and Seven and Coskun [78], which indicates that the advantages of economic progress may not be uniformly shared, thereby worsening poverty in some circumstances.
Upon examining the inflation rate, we discovered that it has a substantial influence on poverty rates. Rising inflation gradually diminishes purchasing power, particularly for those with lower incomes, potentially increasing poverty rates. By including inflation in our model, we ensured that the observed relationships between the total assets of Islamic banks, GDP and poverty are not driven by inflationary effects. The robustness of our conclusions is further strengthened by the consistency of the fixed and random effects models, even after accounting for the inflation rate. Regardless of the model specification used, the fundamental conclusions remain the same. The random effects analysis confirmed the negative association between Islamic banks' aggregate assets and poverty, as well as the positive correlation between GDP and poverty. Additionally, it underscored the damaging impact of inflation on poverty.
To summarize, the findings of our research are robust and dependable, backed by thorough statistical analysis and the inclusion of supplementary control factors such as the inflation rate. The strong inverse correlation between the aggregate assets of Islamic banks and poverty highlights the vital function of Islamic banking in advancing financial inclusion and mitigating poverty.
Total assets of Islamic banks and poverty rate
According to the findings of this study, non-traditional banks play an essential role in poverty reduction and economic growth in developing nations, particularly in countries with common law legal systems. Non-mainstream banks, particularly rural ones, give more loans to SMEs and households than mainstream banks. This is significant since SMEs are the backbone of many developing countries and play a key role in employment creation and poverty reduction. The findings also reveal that non-traditional banks are more likely to innovate and provide new financial products and services tailored to the needs of low-income communities. Non-traditional banks, for example, frequently offer mobile banking services and microcredit loans, which are essential for reaching people in rural areas and the informal sector. The result supports the findings of Alfian et al. [10], Martiana and Rahmanto [79], Muflih [17], Nugroho et al. [11], and Rashid and Intartaglia [3], who proposed that Islamic banks have a role in alleviating poverty.
Economic factors and poverty rate
Furthermore, the findings indicate that the influence of non-traditional banks on poverty reduction is stronger when economic growth is greater in countries with common law legal roots. This is an indication of the importance of the inflation rate and gross domestic product growth, in accordance with Ravallion [64]. Overall, this study provides solid evidence that non-mainstream banks may be used to promote poverty reduction and economic growth in developing countries. According to the results, policymakers should encourage the development of non-traditional banks and provide a supportive regulatory environment for these institutions, as suggested by studies that explored the impact of financial development on poverty alleviation [5,38,80]. However, further justification could be provided by proposing an additional variable representing the role of Islamic banks in alleviating poverty.
Legal origin and poverty rate
The control variable, legal origin, correlates moderately (0.36) with the poverty rate. This implies a positive relationship between countries with a common law legal background and the number of non-traditional banks, in line with previous studies [30,34,35]. A possible explanation for the greater influence of non-mainstream banks in common law countries is that common law legal systems foster more financial innovation and competition. Common law systems are based on precedent, providing more flexibility and adaptation to changing market conditions. Civil law systems, on the other hand, are based on codified laws, which can be more rigid and less conducive to innovation [34,35].
Another argument is that common law countries have more robust mechanisms for protecting property rights and enforcing contracts. This is significant because non-mainstream banks frequently rely on unsecured lending, meaning they lend money without collateral. To be willing to provide unsecured loans, non-traditional banks must be confident they can recover their money if borrowers default. Strong property rights and contract enforcement institutions reduce the danger of default, making it more attractive for non-traditional banks to lend to high-risk borrowers. Finally, common law countries may have a more developed culture of entrepreneurship and risk-taking.
This study is based on a somewhat limited sample of developing countries. Further research is required to corroborate the study's findings and to identify the specific factors that drive the influence of non-traditional banks on poverty reduction and economic growth.
Conclusion & recommendations
Poverty and unemployment have always been challenges to the growth and development of any given state. The continuous increase in the population and the scarcity of available resources lead to more people living in poverty. Despite the constant effort to mitigate poverty and to create new job opportunities worldwide, particularly in the developing world, a considerable share of people still live in catastrophic conditions, sometimes with no stable income. Governments and the private sector may play a key role in poverty reduction. The provision of the necessary physical and financial infrastructure by governments may encourage the private sector, and above all financial institutions, to invest and contribute to creating job opportunities, which leads to poverty reduction. Financial institutions in general, and Islamic ones in particular, may function as a means for economic growth and development by providing the necessary funds for individuals wishing to start income-generating activities. For that, financial development is needed to ensure maximum benefits for society. Therefore, this paper examines the relationship of financial development, economic growth, and legal origin with poverty. Unlike earlier studies, financial development is measured by Islamic banks' total assets/gross domestic product, poverty is measured by the poverty rate, and differences in legal systems across developing countries are captured by legal origin.
The study employed the poverty growth model suggested by Ravallion (1997), regressing poverty against total assets/GDP, GDP, and legal origin, applying fixed-effects and random-effects techniques to a panel of 12 developing countries from the fourth quarter of 2013 to the second quarter of 2019. The findings show that poverty reduction is achieved when financial development is measured by Islamic banks' total assets; economic growth does not show a significant negative relationship but instead a significant positive one. Moreover, the banks have a high impact on poverty. The study further argues that the development of Islamic banks will contribute to combating poverty, as reported by previous studies arguing that financial development in banking reduces poverty. The current study's results align with those studies, at least when poverty is measured by the poverty rate [7,48,75]. Poor access to financial services remains an obstacle; fees imposed on the poor to participate in financial and business markets may have hindered development. The study recommends extending future studies to obtain more data about Islamic banks, the poverty gap, and the poverty headcount ratio; they may also include more indicators.
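To make the estimation strategy concrete, the following is a minimal sketch of how such a fixed-effects/random-effects panel regression could be run with the linearmodels package; it is illustrative only (not the authors' code), and the file name and column names are hypothetical placeholders.

```python
# Hedged sketch of the panel specification described above: poverty rate
# regressed on Islamic banks' total assets/GDP, GDP growth, and legal origin.
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("panel_12_countries.csv")      # hypothetical input file
df = df.set_index(["country", "quarter"])       # entity and time index

y = df["poverty_rate"]
X = df[["islamic_assets_to_gdp", "gdp_growth", "legal_origin"]]

# Fixed effects: legal_origin is time-invariant, so it is absorbed by the
# country effects and must be dropped from (or by) the FE specification.
fe = PanelOLS(y, X, entity_effects=True, drop_absorbed=True).fit()

# Random effects: time-invariant regressors such as legal_origin remain estimable.
re = RandomEffects(y, X.assign(const=1.0)).fit()

print(fe.summary)
print(re.summary)
```

A Hausman-type comparison between the two fits would then guide the choice of specification, as is standard practice in such panel studies.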
Limitations
The present investigation offers novel insights into the relationship between financial development and poverty alleviation; however, it is restricted to nations with Islamic banking systems and to limited data. As a result, the scope and amount of data examined may be expanded in subsequent studies. Ordinary least squares analysis was used in the research; other analytic techniques, such as ARDL or GMM, are advised to further investigate the robustness of the findings. We urge future scholars to include additional social and cultural factors in examining Islamic banks' performance. The study's findings can only be applied broadly once a more fine-grained analysis of a single nation is conducted, as every nation has a unique charter regarding Islamic banking and poverty. We attempted to add a variable on political unrest, but eliminated it after finding that it was correlated with the economic variables.
Establishing causality between the total assets of Islamic banks and poverty reduction is challenging.Although the study demonstrates significant correlations, definitive causality cannot be established due to potential issues such as reverse causation and unobserved confounding factors.Additionally, the study employs a specific measure of the poverty rate, which may only encompass some dimensions of poverty.Utilizing alternative metrics, such as the Multidimensional Poverty Index (MPI), could provide a more comprehensive understanding of poverty reduction impacts.
Implication
Islamic banks have the potential to influence poverty reduction significantly: according to this research, expanding Islamic banks has a greater beneficial influence on reducing poverty than traditional financial institutions. This information may be helpful to policymakers looking for alternative approaches to combat poverty. Contrary to some predictions, the current research suggests that economic expansion is not linked to reduced poverty in this particular circumstance; indeed, economic development can occasionally render poverty worse. This finding casts doubt on long-held beliefs about the universally beneficial effects of growth for the impoverished and indicates that economic development measures should be undertaken in tandem with dedicated poverty-reduction initiatives. Hail's development plans have the potential to greatly improve its competitive edge by effectively resolving the long-standing problems of poverty and unemployment, which have consistently impeded economic progress in the past. Hail may promote economic stability and job creation by prioritizing investments in physical and financial infrastructure and by cultivating collaborations with Islamic financial institutions. These steps are expected to attract more private-sector investment, thereby enhancing economic growth. Implementing such financial measures can help reduce poverty, leading to a more inclusive and prosperous community. Hail's development efforts will therefore help achieve long-term economic growth, strengthening its competitive position in the region.
Fig. 2. Stakeholder theory in banking, clients as one of the stakeholders. Source: VB bank website, Zurich.
Table 3. Countries selected for the current study.
Table 6. Result of the regression (fixed effects - random effects) analysis. | 2024-07-27T15:17:18.800Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "e26839945e14b67aa24edb02c7a196c2ac17ad9e",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844024111954/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f80ca98fbf396315f515ab9baff95af1399eafc",
"s2fieldsofstudy": [
"Economics",
"Business",
"Political Science"
],
"extfieldsofstudy": []
} |
118585057 | pes2o/s2orc | v3-fos-license | Manipulation of single-photon states encoded in transverse spatial modes: possible and impossible tasks
Controlled generation and manipulation of photon states encoded in their spatial degrees of freedom is a crucial ingredient in many quantum information tasks exploiting higher-than-two dimensional encoding. Here, we prove the impossibility of arbitrarily modifying $d$-level state superpositions (qu$d$its) for $d>2$, encoded in the transverse modes of light, with optical components associated with the group of symplectic transforms (Gaussian operations). Surprisingly, we also provide an explicit construction showing how non-Gaussian operations acting on mode subspaces enable one to overcome the limit $d=2$. In addition, this set of operations realizes the full SU(3) algebra.
I. INTRODUCTION
Most promising approaches for scalable quantum communication (QC) rely on the use of photons as the main carriers of information among remote nodes of quantum networks, where matter-based quantum memories are located [1,2,3,4,5]. Photons, besides being the natural candidate for QC due to their long decoherence time and the relative ease with which they can be manipulated, can actually encode multiple quantum bits of information (qubits) into various degrees of freedom. These include frequency, polarization, linear momentum and orbital angular momentum [6]. The possibility of simultaneously exploiting these degrees of freedom [7,8] is becoming increasingly appealing for the faithful mapping of quantum states between light and matter [3]. A fundamental question then arises: What are the most general photon state manipulations allowed by benchmark optical components? It is of paramount importance to obtain a clear representation of all such state mappings to further develop a truly multi-degree-of-freedom photon state engineering.
Here, we address the problem of whether, by resorting to the symplectic group of optical transformations on spatial transverse-field modes, it is possible to perform arbitrary manipulations on photon states encoded in large, but finite, d-dimensional superpositions of these modes (qudits). This is relevant for non-dichotomic QC protocols, which include those exploiting multimode squeezing [9] and the orbital angular momentum (OAM) of light [10,11,12,13,14,15,16,17]. For OAM, one of its main distinguishing features is the access to, in principle, an infinite-dimensional Hilbert space expanded by cylindrically-symmetric paraxial eigenmodes (e.g. the Laguerre-Gaussian basis) [18,19,20,21]. Spatial encoding conveys several independent channels of information that could be very useful in quantum cryptographic schemes with larger alphabets [22] and security enhancement against eavesdropping [23]. Even for quantum computation applications, the high-dimensional aspect would enable the optimization of certain computing architectures [24].
A necessary condition to perform arbitrary unitary operations on a pure quantum state |ψ⟩ = Σ_{j=1}^{d} α_j |j⟩, consisting of a d-dimensional superposition of orthogonal eigenmodes |j⟩, is to modify in a controlled way each of the complex coefficients α_j. In most of the experimental realizations oriented towards the use of spatial degrees of photons for high-dimensional encoding, phase holograms and reconfigurable spatial light modulators have been employed to approximately manipulate specific combinations of optical transverse modes [6]. In practice, however, these elements do not strictly preserve paraxiality but, rather, behave as non-unitary transformations, thus constituting a source of mode noise that eventually destroys the desired large, but finite, multidimensionality of the quantum states to be exploited. Our first main result shows that when these, or any combination of, optical elements belong to the group of symplectic transformations (which include Gaussian operations), it is impossible to arbitrarily modify single-photon qudit states for d > 2 via unitary operations generated by those transforms. Hence, a clear motivation emerges: Is it possible to find transformations on paraxial modes which allow one to really overcome the limit d = 2? Our second main result provides a positive answer to this question; we present a set of non-Gaussian operations that truly enable us to arbitrarily manipulate (up to global phases) single-photon qutrit (d = 3) states. Furthermore, this set of operations constitutes an SU(3) algebra.
The paper is organized as follows: Section II gives a brief summary of the formalism on symplectic groups and transformations in the optical phase space. In Section III we introduce and characterize the most general representation of unitary (metaplectic) operators corresponding to all possible optical symplectic transformations that can be performed on transverse field modes. Section IV provides the first main result of our paper; we prove that, via the group of symplectic transforms acting on superpositions of paraxial modes, it is impossible to implement operations that change arbitrary qudit states onto any other qudit for d > 2. In Section V we extend our analysis to non-Gaussian operations on these modes and present new routes towards the aim of truly manipulating arbitrary single-photon qudits. Section VI concludes the paper with a discussion of alternative approaches to implement controlled gates on single-photons using more than one of their degrees of freedom. A simple optical scheme for a CNOT gate exploiting OAM and polarization, is proposed.
II. SYMPLECTIC GROUP FORMALISM
To put in context the class of optical transformations referred to above, it is necessary to start by introducing the symplectic formalism that will be used extensively throughout the paper. We first recall that the dynamics of classical and quantum Hamiltonian systems has an underlying symplectic structure. Symplectic methods have been applied in the theory of elementary particles, condensed matter, accelerator and plasma physics, oceanographic and atmospheric sciences and in optics [25,26,27]. Fundamental to all of them is the phase space picture. Any classical system with n degrees of freedom is described by a set of pairs q_j, p_j (j = 1, 2, ..., n) of mutually conjugate canonical variables. In the quantum domain one can associate to these variables the irreducible set of canonical Hermitian operators q̂_j, p̂_j. The basic kinematic structure is provided by Poisson brackets in the former case and by the Heisenberg commutation relations in the latter. By assembling the canonical variables and operators into 2n-component vectors ξ = (q_1, q_2, ..., q_n, p_1, p_2, ..., p_n) and ξ̂ = (q̂_1, q̂_2, ..., q̂_n, p̂_1, p̂_2, ..., p̂_n), the Poisson brackets and the Heisenberg commutation relations can be cast, respectively, as {ξ_α, ξ_β} = Ω_αβ and [ξ̂_α, ξ̂_β] = iΩ_αβ (α, β = 1, 2, ..., 2n), where Ω = [[0, 1_n], [−1_n, 0]] is the 2n-dimensional symplectic metric matrix. Of particular relevance are the real linear canonical transformations among quantum (classical) canonical quantities [28]. They preserve the Heisenberg (Poisson) relations and are represented by symplectic matrices S: ξ̂ → ξ̂′ = S ξ̂, obeying the condition S Ω S^T = Ω. The set of all such 2n-dimensional real matrices forms the (2n² + n)-parameter non-compact symplectic group Sp(2n, R). The power of the symplectic formalism becomes apparent in the following general setting. Let H denote the Hilbert space of n-mode states ρ̂ on which the ξ̂_α act. Given that for any S ∈ Sp(2n, R) the Hermiticity properties and commutation relations of the ξ̂_α are conserved, and since the ξ̂_α act irreducibly on H, it follows from the Stone-von Neumann theorem that one can define unitary operators Û(S) on H implementing S, such that [28] Û†(S) ξ̂ Û(S) = S ξ̂ (2).
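As a quick numerical illustration of the defining condition S Ω S^T = Ω (a sketch we add here, not taken from the paper), one can verify it for the rotation-type symplectic matrix describing a 50:50 beam splitter acting on two modes, with the phase-space ordering (q_1, q_2, p_1, p_2) used above:

```python
# Check that a beam-splitter rotation preserves the symplectic metric.
import numpy as np

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

theta = np.pi / 4                       # 50:50 beam splitter
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.block([[R, np.zeros((2, 2))],
              [np.zeros((2, 2)), R]])   # same rotation on the q's and the p's

assert np.allclose(S @ Omega @ S.T, Omega)   # S is symplectic
```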
III. METAPLECTIC OPERATIONS ON PARAXIAL MODES
Within classical and quantum optics, the symplectic formalism has been extensively used both in studying mode-mapping properties of lossless first-order (paraxial or ABCD) systems [18,20,29,30,31] and in characterization of continuous-variable entanglement [32]. An important class of symplectic transforms is that of two-mode systems represented by the symplectic group Sp(4, R). For instance, bipartite Gaussian operations, which preserve the Gaussian character of the Wigner functions, belong to Sp(4, R). Let us make explicit the form of all possible Û(S) when S ∈ Sp(4, R). They give rise to the unitary metaplectic representation of Sp(4,R) acting on H. All these unitary operations Û(S) are generated by ten Hermitian operators Ĵ, quadratic in ξ̂, that can be split into two sets [33]: passive and active generators. The passive set encompasses the maximal compact subgroup U(2):
L̂_o = (â†_x â_x + â†_y â_y)/2, L̂_x = (â†_x â_y + â†_y â_x)/2, L̂_y = (â†_x â_y − â†_y â_x)/2i, L̂_z = (â†_x â_x − â†_y â_y)/2. (3)
Here, â_j = (q̂_j + i p̂_j)/√2 (respectively â†_j), j = x, y, are the two annihilation (creation) operators for the orthogonal transverse modes. The passive operators (3) have the form of the well-known Stokes operators. They obey the usual commutation relations [L̂_i, L̂_j] = i ε_ijk L̂_k (i, j, k = x, y, z), with L̂_o being the only commuting element in U(2).
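The SU(2) commutation relations quoted above are easy to verify numerically. The sketch below is our illustration (it assumes the standard Stokes-operator forms written in Eq. (3)); it builds truncated two-mode operators and checks [L̂_x, L̂_y] = i L̂_z on low-lying Fock states, where the Fock-space truncation is harmless:

```python
import numpy as np

N = 12                                            # Fock-space cutoff per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)          # single-mode annihilation op
I = np.eye(N)
ax, ay = np.kron(a, I), np.kron(I, a)             # mode x and mode y operators

Lx = (ax.conj().T @ ay + ay.conj().T @ ax) / 2
Ly = (ax.conj().T @ ay - ay.conj().T @ ax) / (2j)
Lz = (ax.conj().T @ ax - ay.conj().T @ ay) / 2

comm = Lx @ Ly - Ly @ Lx
# Restrict the comparison to the (n_x + n_y <= 2) block, far from the cutoff,
# where the truncated matrices reproduce the algebra exactly.
low = [ix * N + iy for ix in range(3) for iy in range(3) if ix + iy <= 2]
assert np.allclose(comm[np.ix_(low, low)], (1j * Lz)[np.ix_(low, low)])
```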
The active set, comprising the six generators K̂ and M̂, is responsible for the noncompactness of Sp(4,R). Below we elucidate the action of each of the ten generators on a spatial mode carrying OAM. As any arbitrary sequence of symplectic transformations S_m is again another symplectic transformation S = Π_m S_m, one concludes that the most general metaplectic operator Û(S) corresponding to S is represented by a single exponential of i times real linear combinations of any of the above generators:
Û(S) = e^{−i s·Ĵ}, (5)
with Ĵ ∈ {L̂, K̂, M̂}, and s a ten-parameter vector. When applied to photon number states, the passive (active) generators have a well-known interpretation: they conserve (do not conserve) photon number. Unlike active generators, which require nonlinear photon interactions, passive generators can be implemented with linear optical components: beam splitters and phase shifters. Now, despite the exact isomorphism between symplectic transformations on photon number states and spatial modes, they have quite distinct physical implications. To gain insight on how the metaplectic operator (5) affects spatial modes, we resort to the Wigner representation in conjunction with the Stone-von Neumann theorem (2). A revealing example is the following. Consider a Laguerre-Gaussian mode [6] LG_{ℓ,p}, where the indices ℓ = 0, ±1, ±2, ... and p = 0, 1, 2, ... stand for the topological charge and the number of nonaxial radial nodes. Let W_{ℓ,p}(ξ) be the associated Wigner function, which, in the general case, is non-Gaussian [18,20,29]. For each of the ten generators (3) and (4), one can follow the transformation of W_{ℓ=1,p=0} under the action of each passive and active generator. It can be seen that they produce fundamentally different mode-mapping geometries. Passive generators describe rotations on the orbital Poincaré sphere [18,20,34]. They preserve the order N ≡ |ℓ| + 2p of any of the modes lying on the sphere. Generator L̂_o yields the mode order, L̂_z represents real spatial rotations on the transverse x−y plane containing the modes and is proportional to the component of the OAM operator along the propagation direction [20], with LG_{ℓ,p} being the eigenmodes. L̂_x and L̂_y represent simultaneous rotations in the four-dimensional phase space: L̂_x produces rotations in the x−p_x and y−p_y planes by equal and opposite amounts, whereas L̂_y gives rise to rotations in the x−p_y and y−p_x planes by equal amounts. The eigenstates of L̂_x are the Hermite-Gaussian HG_{n_x,n_y} modes, where n_x, n_y are nonnegative integer indices, and their mode order is N ≡ n_x + n_y. Both Laguerre- and Hermite-Gaussian bases are unitarily related: LG_{ℓ,p} transforms into HG_{n_x,n_y} via e^{−i(π/2)L̂_y}. We note in passing that the interferometric scheme proposed in Ref. [35] to measure the OAM spectrum (i.e. the index decomposition) of a light beam, which only involves L̂_z, could be generalized to determine the complete Hermite-Gaussian spectrum by replacing L̂_z with L̂_o and L̂_x [36]. In contrast with passive generators, the active ones scale (squeeze) the spatial modes and change the order N, giving rise to infinite mode superpositions. Of these, only K̂_z and M̂_y suffice to describe, jointly with the set (3), the general metaplectic operator (5) by recourse to the following passive-active-passive decomposition Û = e^{−iμ·L̂} e^{−i(ν_y M̂_y + ν_z K̂_z)} e^{−iη·L̂}, still requiring ten parameters.
The symplectic matrices associated with both passive and active generators can be implemented with a small arrangement of fewer than ten spherical and/or cylindrical lenses, solely controlled by variations of the focal lengths and/or rotations about the system axis [36,37,38]; that is, with simple linear optical components.
Consider a single-photon pure qudit state encoded in a d-dimensional superposition of Hermite-Gaussian modes, |ψ⟩ = Σ_{n_x,n_y} c_{n_x,n_y} |n_x, n_y⟩ (7). Any qudit requires, at least, 2d independent real parameters, albeit normalization and invariance of (7) under a global phase reduce this number to 2(d−1). A necessary condition to fully manipulate a single-photon state (7) is to arbitrarily modify the d complex coefficients c_{n_x,n_y} (e.g. it should be possible to set all coefficients c_{n_x,n_y} equal to zero except for one of them), leaving invariant the d-dimensional subspace H_d expanded by {|n_x, n_y⟩}.
In other words, one must discard all transformations on (7) giving rise to modes not belonging to H_d. Let us analyze the most important restrictions imposed by unitary operations acting on (7) and generated by the group of transformations S ∈ Sp(4, R). First, notice that since the general metaplectic operator (5) involves ten generators, d-dimensional superpositions with d > 6 cannot be arbitrarily transformed within Sp(4, R). This fact, of course, does not preclude the possibility to manipulate qudits with d ≤ 6. A key observation is the recognition that finite-dimensional representations of Sp(4,R) are necessarily nonunitary, owing to the noncompactness of Sp(4,R). That is, the noncompact part of Sp(4,R), represented by the active generators (4), is to be excluded from the set of symplectic transformations in order to keep the subspace H_d finite. Otherwise, the qudit (7) would become an infinite superposition of all HG modes under the general action of (5). More explicitly, let P̂_d be a projector onto H_d, so that it fulfills P̂_d |ψ⟩ = |ψ⟩. Notice that if the metaplectic operator (5) must keep (7) in H_d, then P̂_d Û |ψ⟩ = Û |ψ⟩, which implies that Û^{−1} P̂_d Û = P̂_d. This last condition should hold for any choice of the parameters s in (5), and so it follows that the commutator [s·Ĵ, P̂_d] = 0. However, this vanishing commutator is incompatible with the presence of the active generators (4). One is therefore limited to the compact subgroup of Sp(4,R), i.e. to the set of passive generators (3), to perform unitary operations on (7) leaving the subspace H_d invariant. This means that the metaplectic operator (5) reduces to the passive one Û_L = e^{−i(s_o L̂_o + s_x L̂_x + s_y L̂_y + s_z L̂_z)}, which now only contains the four free parameters {s_o, s_x, s_y, s_z}. Consequently, one must confine to three-dimensional subspaces H_d, with state (7) being a qutrit. In principle, the intervening modes in (7) can have different order N. However, since Û_L preserves N, it would then be impossible to attain the mode transformation |n_x, n_y⟩ → |n′_x, n′_y⟩ when n_x + n_y ≠ n′_x + n′_y. There is still the possibility that the three modes could have the same order. In this particular case, taking into account that L̂_o |ψ⟩ = (N/2)|ψ⟩ and [L̂_i, L̂_o] = 0, one can express the action of Û_L as Û_L |ψ⟩ = e^{−i s_o N/2} e^{−i(s_x L̂_x + s_y L̂_y + s_z L̂_z)} |ψ⟩. Up to a global phase, there are only three free parameters {s_x, s_y, s_z} to carry out the general transformations on the state (7), which are insufficient even for qutrits (as they involve, at least, four free real parameters). We have thus proven the following result: Proposition.- It is impossible to arbitrarily modify the d-dimensional mode superpositions of single-photon pure qudit states (7) for d > 2, via unitary operations Û(S) generated by symplectic transforms S ∈ Sp(4,R).
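The parameter counting that underpins the proposition can be condensed into a single inequality; the following restatement is our summary of the argument above, not a numbered equation from the paper:

```latex
% A pure qutrit needs 2(d-1) = 4 real parameters, but after e^{-i s_o N/2}
% collapses to a global phase only three symplectic parameters remain.
\[
\underbrace{2(d-1)\big|_{d=3} = 4}_{\text{free parameters of a qutrit}}
\;>\;
3 = \#\{s_x,\, s_y,\, s_z\}
\]
```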
Several comments are in order. Our proposition is expected to also hold for mixed states. Arbitrary operations on qubits (d = 2) are not prohibited within the subgroup U(2) of Sp(4,R), as it should be [20]. Operators Û_L acting on higher-than-two mode superpositions (7) restrict the possible values of the coefficients c_{n_x,n_y}; the higher the value of d, the larger the number of constraints on c_{n_x,n_y}. In fact, it is easy to show that, within Sp(4,R), the most general finite superpositions (7) reduce to the well-known Dicke or spin coherent states [39] (which depend only on two real parameters, leaving aside global phases). Although we have identified spherical and cylindrical lenses as the main optical elements of Sp(4,R), the above proposition also affects phase holograms belonging to the metaplectic representation of Sp(4,R). Not all unitary and paraxial transformations are in such a representation. It is an open question to determine the entire class of unitary operators outside the metaplectic representation of Sp(4,R) that leave invariant the bases of paraxial modes; but it would definitely fall into the category of non-Gaussian operations. In the next Section we partially clarify this question; we find examples of non-Gaussian operations which preserve paraxiality and allow us to overcome the limit established by the preceding proposition. Moreover, since Laguerre- and Hermite-Gaussian bases are unitarily connected, our proposition also establishes the impossibility to achieve arbitrary qudit gates on multi-dimensional superpositions of modes bearing OAM. Given that the concept of OAM of light is only strictly meaningful within the paraxial approximation [6,20], encoding photons in non-paraxial modes would generally couple polarization and OAM, making most QC tasks in such scenarios extremely difficult.
V. NON-GAUSSIAN OPERATIONS
In contrast with Gaussian operations, non-Gaussian operations remain to be fully explored. It has been recognized that non-Gaussian operations could represent an advantage for performing some quantum information tasks. Non-Gaussian operations on continuous variables allow access to results beyond no-go statements concerning Gaussian operations. For instance, quantum speed-up is impossible for harmonic oscillators by Gaussian operations with Gaussian inputs [40]. Distillation of Gaussian bipartite entanglement is also impossible by performing only Gaussian local operations and classical communication based on homodyne detection, and requires non-Gaussian operations [41]. It has been experimentally demonstrated that entanglement between Gaussian entangled states can be increased by conditional subtraction of single photons from the Gaussian beams [42]. For universal quantum computation with continuous-variable cluster states [43], at least one non-Gaussian projective measurement is necessary.
In the present scenario, our preceding proposition imposes a restrictive limit for Gaussian operations on arbitrary superpositions of spatial mode states. However, in this section, we report a new class of non-Gaussian operations which enable us to fully manipulate superpositions of three-level states (qutrits), beyond the restrictions imposed by our above no-go proposition. Furthermore, this class of operations forms a complete set of single qutrit gates fulfilling a SU(3) algebra.
Consider the following eight generators, arranged in three triads Γ_1, Γ_2 and Γ_3, acting on the Hilbert space of Hermite-Gaussian modes (8). Quite remarkably, these generators, within the subspace expanded by the Hermite-Gaussian modes H_T = {|0,0⟩, |1,0⟩, |0,1⟩}, fulfill the SU(3) algebra. Unitary operators Û_Γ1 generated by the first triad give rise to superpositions between the two modes |1,0⟩ and |0,1⟩, leaving invariant the fundamental Gaussian mode |0,0⟩. Unitaries Û_Γ2 and Û_Γ3, generated by the second and third triads, produce superpositions between the two modes |0,0⟩ and |1,0⟩ (leaving invariant |0,1⟩), or the modes |0,0⟩ and |0,1⟩ (leaving invariant |1,0⟩), respectively. In contrast with Û_Γ1, the action of Û_Γ2 and Û_Γ3 on H_T gives rise to a new feature: non-conservation of the mode order. However, this non-conservation is fundamentally different from the one encountered in the noncompact representation of Sp(4,R), since it preserves the subspace H_T. Figure 2 summarizes the action of the three unitaries Û_Γ on the subspace H_T. It is worth mentioning that all these non-Gaussian operations can be implemented with passive optical elements having higher-than-first-order aberrations (nonquadratic refractive surfaces) [44]. An open and interesting problem would be the extension of the Stone-von Neumann theorem (2) to the case of our cubic generators (8). This would enable one to find the explicit form of the symplectic transform and thus the construction of the associated optical system.
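Since the explicit mode-operator forms of the generators (8) are not reproduced in this text, the sketch below (our illustration, not the paper's construction) uses the abstract 3x3 representation on the ordered basis {|0,0⟩, |1,0⟩, |0,1⟩}: each triad generates an SU(2) that mixes one pair of modes while leaving the third invariant, which is exactly the action described above, and together the three triads span su(3) (the three diagonal generators are linearly dependent, leaving eight independent ones):

```python
import numpy as np
from scipy.linalg import expm

def su2_triad(i, j):
    """Pauli-like x, y, z generators embedded on levels i and j of a qutrit."""
    X = np.zeros((3, 3), complex); X[i, j] = X[j, i] = 1
    Y = np.zeros((3, 3), complex); Y[i, j] = -1j; Y[j, i] = 1j
    Z = np.zeros((3, 3), complex); Z[i, i] = 1; Z[j, j] = -1
    return X, Y, Z

G1 = su2_triad(1, 2)   # mixes |1,0> and |0,1>, leaves |0,0> invariant
G2 = su2_triad(0, 1)   # mixes |0,0> and |1,0>, leaves |0,1> invariant
G3 = su2_triad(0, 2)   # mixes |0,0> and |0,1>, leaves |1,0> invariant

U = expm(-1j * 0.7 * G2[1])                    # a U_Gamma2-type rotation
print(np.round(U @ np.array([1, 0, 0]), 3))    # |0,0> -> cos(0.7)|0,0> + sin(0.7)|1,0>
```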
Complete manipulation of the qutrit |ψ⟩ = c_{0,0}|0,0⟩ + c_{1,0}|1,0⟩ + c_{0,1}|0,1⟩ is now possible using the SU(3) group (8), although to produce a general qutrit (up to a global phase) it suffices to perform the following sequence of operations. Starting, for example, with an input fundamental Gaussian mode |0,0⟩ subjected to the unitary operator Û_Γ2, one can obtain
Û_Γ2 |0,0⟩ = cos θ |0,0⟩ + e^{iφ} sin θ |1,0⟩. (9)
Then, taking into account the closed SU(2) algebra obeyed by Û_Γ1, it follows that
Û_Γ1 |1,0⟩ = cos θ′ |1,0⟩ + e^{iφ′} sin θ′ |0,1⟩. (10)
With this specific operation structure, using (9) and (10), we can construct a general normalized qutrit state (up to a global phase):
|ψ⟩ = cos θ |0,0⟩ + e^{iφ} sin θ (cos θ′ |1,0⟩ + e^{iφ′} sin θ′ |0,1⟩). (11)
Since the four parameters θ, θ′, φ, φ′ can be varied independently during the process, not all generators in the set (8) are actually needed to produce any qutrit encoded only in paraxial spatial modes. Notice that our results can also be extended to other physical scenarios, due to the isomorphism of the formalism, although in our case an additional motivation is provided by the simplicity of their experimental implementation.
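A quick numerical sanity check of the sequential construction (using the parametrization of Eqs. (9)-(11) as reconstructed above; the original conventions may differ by phases or signs) confirms that the resulting state is normalized for any choice of the four parameters:

```python
import numpy as np

theta, phi, thetap, phip = 0.9, 1.3, 0.4, 2.1     # arbitrary free parameters

c00 = np.cos(theta)
c10 = np.exp(1j * phi) * np.sin(theta) * np.cos(thetap)
c01 = np.exp(1j * phi) * np.sin(theta) * np.exp(1j * phip) * np.sin(thetap)

psi = np.array([c00, c10, c01])                   # basis (|0,0>, |1,0>, |0,1>)
assert np.isclose(np.vdot(psi, psi).real, 1.0)    # normalized for any choice
```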
VI. DISCUSSION AND CONCLUSIONS
In spite of the stringent limits raised by our results of Section IV, we have clearly shown how non-Gaussian operations can circumvent most of those difficulties. It is also worth emphasizing that there exist other alternative approaches, exploiting the spatial encoding of light, to fully manipulate higher-than-two-dimensional Hilbert spaces for various quantum information tasks. These approaches rely on the use of several degrees of freedom of light, albeit they cannot attain very large subspace dimensionalities. To illustrate, consider that we perform transformation (10) and, analogously to transformation (9), we wish to produce a general (up to a global phase) single-photon qutrit state. Instead of (9), we can use another degree of freedom of the same photon (e.g. polarization). To do so, we need both a complete set of single-photon qubit gates in each of the degrees of freedom and a conditional gate between the two involved degrees of freedom. For instance, an efficient linear single-photon CNOT gate, in which photon polarization acts as the control qubit on the other photon degree of freedom, OAM, which plays the role of the target qubit, is feasible with current technology. There are several possible routes, such as with space-variant optical axis phase plates made of nematic liquid crystals [45,46], or using a Mach-Zehnder configuration [47]. A conceptually simple scheme is depicted in Fig. 3. This gate includes one polarizing beam splitter (PBS) and two pairs of cylindrical lenses (CL) whose bases subtend a 45° angle with the plane of the interferometer. This interferometer resembles previous Sagnac interferometers used for measuring the spatial Wigner function [48] and for other single-photon quantum gate demonstrations employing polarization and continuous variables [49]. There, the inner arm of the interferometer contained a Dove prism. Here, the presence of cylindrical lenses constitutes the key feature to exploit photon OAM. According to the input photon polarization state (horizontal or vertical), the input photon views the cylindrical lenses CL1 with a different orientation and experiences a mode transformation (LG → HG) depending on the particular value of ℓ. After exiting through the PBS, the second pair of cylindrical lenses CL2 yields the mode transformation HG → LG, which completes the action of the polarization-OAM-CNOT single-photon gate. An even more fascinating scenario is the transfer of photons carrying OAM onto Bose-Einstein condensates [50] and their storage in electromagnetically induced transparency media [51]. In this respect, it would be very interesting to explore the possibility of mapping the correlations of photons entangled in OAM [10,12,13,14,16,19,21] onto quantum holograms, allowing for the reconstruction of nonclassical states of light from a matter-based quantum memory.
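In matrix form, the gate described above is the familiar controlled-NOT on the four-dimensional polarization ⊗ OAM-qubit space. The sketch below is illustrative; the convention that vertical polarization flips the OAM qubit is our assumption, not fixed by the text:

```python
# CNOT with polarization as control and the OAM qubit as target,
# basis ordering |pol, OAM>: |H,0>, |H,1>, |V,0>, |V,1>.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])   # bit flip on the OAM qubit
P_H = np.diag([1, 0])            # projector on horizontal polarization
P_V = np.diag([0, 1])            # projector on vertical polarization

CNOT = np.kron(P_H, I2) + np.kron(P_V, X)
print(CNOT.astype(int))          # identity block for H, flip block for V
```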
In conclusion, we have shown that if single-photon pure qudit states are prepared in a d-dimensional superposition of spatial modes, it is impossible to arbitrarily change such mode superpositions for d > 2 by solely resorting to unitary operations generated by symplectic transforms of the group Sp(4,R). Our results provide a complete characterization of linear canonical transformations on transverse optical modes and pose a considerable challenge to quantum communication protocols exploiting multidimensional spatial encoding: one cannot have full access to, and control of, large but finite-dimensional Hilbert spaces expanded by these modes. Implementation of a new class of paraxial non-Gaussian transformations is required to fully encode any arbitrary single-photon qudit state. We have provided an explicit construction of this new class of operations. Moreover, using the spatial encoding in combination with other degrees of freedom, one can overcome this problem, though at the price of scalability. In this case, a conditional gate between the involved degrees of freedom is needed.
We thank G. Giedke for useful discussions and acknowledge financial support from the Spanish Ministerio de Educación y Ciencia through the Juan de la Cierva Grant Program and Projects FIS2005-01369 and Consolider Ingenio 2010 QIOT CSD2006-00019. | 2008-02-11T09:22:11.000Z | 2007-12-07T00:00:00.000 | {
"year": 2007,
"sha1": "a48ca86e523412c445504261c15bf5682cd0e4f7",
"oa_license": "CC0",
"oa_url": "https://ddd.uab.cat/pub/artpub/2008/115637/phyreva_a2008m1v77n1p012302.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "6438a392e8a2c91f9b96c23ebad9c6926dc33832",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237406676 | pes2o/s2orc | v3-fos-license | Effect of Estrous Cycle on Behavior of Females in Rodent Tests of Anxiety
Anxiety disorders are more prevalent in women than in men. In women the menstrual cycle introduces another variable; indeed, some conditions e.g., premenstrual syndrome, are menstrual cycle specific. Animal models of fear and anxiety, which form the basis for research into drug treatments, have been developed almost exclusively, using males. There remains a paucity of work using females and the available literature presents a confusing picture. One confound is the estrous cycle in females, which some authors consider, but many do not. Importantly, there are no accepted standardized criteria for defining cycle phase, which is important given the rapidly changing hormonal profile during the 4-day cycle of rodents. Moreover, since many behavioral tests that involve a learning component or that consider extinction of a previously acquired association require several days to complete; the outcome may depend on the phase of the cycle on the days of training as well as on test days. In this article we consider responsiveness of females compared to males in a number of commonly used behavioral tests of anxiety and fear that were developed in male rodents. We conclude that females perform in a qualitatively similar manner to males in most tests although there may be sex and strain differences in sensitivity. Tests based on unconditioned threatening stimuli are significantly influenced by estrous cycle phase with animals displaying increased responsiveness in the late diestrus phase of the cycle (similar to the premenstrual phase in women). Tests that utilize conditioned fear paradigms, which involve a learning component appear to be less impacted by the estrous cycle although sex and cycle-related differences in responding can still be detected. Ethologically-relevant tests appear to have more translational value in females. However, even when sex differences in behavior are not detected, the same outward behavioral response may be mediated by different brain mechanisms. In order to progress basic research in the field of female psychiatry and psychopharmacology, there is a pressing need to validate and standardize experimental protocols for using female animal models of anxiety-related states.
INTRODUCTION
It is well-established that the prevalence of psychiatric disorders encompassing anxiety-related pathologies is much higher in women than in men (1)(2)(3). Women are also likely to experience more adverse reactions to some psychoactive drugs than men (4). The menstrual cycle is another significant influence on psychiatric pathology. For example, a perimenstrual exacerbation of symptoms has been reported in women diagnosed with a psychotic disorder; admissions to psychiatric hospital are more common during the peri-menstrual relative to nonperi-menstrual phase of the cycle (5). Some anxiety-related disease states in women are menstrual cycle specific e.g., premenstrual syndrome/premenstrual dysphoric disorder, or feature a worsening of symptoms in the premenstrual phase e.g., panic disorder (6).
Given the clinical finding, it is perhaps surprising that animal models of fear and anxiety, which form the basis for research into drug treatments for humans, have been developed almost exclusively, using males. The sex bias in neuroscience and biomedical research is startling. A survey in 2007 comparing studies of behavioral pharmacology using rats and mice published in 5 reputable journals revealed that more than 80% used only male models (7). Ten years later the situation had hardly changed (7) despite the requirement by NIH and an increasing number of grant awarding bodies worldwide for consideration of sex differences in research proposals (8). Although the situation is now improving, there still remains a paucity of work using females and the available literature on sex differences presents a confusing picture.
Much of the reticence toward working on female animal models stems from the perceived difficulties and variability introduced by the cyclical variation of female sex hormones during the estrous cycle. Since steroid hormone molecules are generally lipophilic, they pass readily through the blood-brain barrier, so that the female brain functions in a constantly changing chemical milieu. It should be mentioned that sex hormones may also impact male behavior, although this is invariably neglected in studies using males for screening of drugs or to unveil the neural/chemical bases of psychopathologies. Testosterone, for example, which influences male dominance, also has a positive effect on the emission of stress-induced 22-kHz calls (9).
To overcome the source of hormonal variability in females, one strategy has been to ovariectomise animals, thereby stabilizing hormone levels. Against a stable baseline, exogenous hormones can be added back in a controlled manner to study their effects on brain circuitry and behavior. This approach has merit in that it has revealed much important information about the cellular actions of different neuroactive steroid hormones at genomic (nuclear) and non-genomic (membrane) levels (10), as well as the impact of artificial manipulation of hormones on behavior. But by its very nature, neutering removes the essence of what it is to be female. The plummeting hormone levels following the procedure may trigger adverse behavioral changes. Ovariectomy can precipitate anxiety- and depressive-like behaviors in female rats (11,12). Indeed, in young, i.e., premenopausal women, hormone replacement therapy is offered following surgical hysterectomy and/or oophorectomy, precisely to prevent the development of adverse emotional states and cognitive decline (13).
There is mounting evidence that responsiveness to drugs with anxiolytic effects, including alcohol, can vary during the estrous cycle (14)(15)(16)(17)(18)(19). An understanding of the changes in brain neurochemistry during the estrous cycle is therefore fundamental to the development of targeted pharmacological treatments for women. Consideration of estrous cycle-linked effects on behavior must not be overlooked and should be incorporated into the design of female animal models of psychiatric pathologies.
Progress toward this goal is however, dependent on the availability of models that are sensitive to estrous cycle stage. At present the literature presents a confusing picture. Choice of behavioral test and strain of rat or mouse as well as differences in housing environment and experimental protocols are likely only some of the sources of variability between laboratories. The lack of universally accepted criteria for staging the estrous cycle undoubtedly introduces another significant source of variability.
Clinical practice in psychiatry and psychobiology builds on the use of appropriately relevant and robust animal models (20)(21)(22), never more so than in relation to development of sex-specific pharmacology for treatment of affective disorders in women. In this short review we highlight a number of commonly used tests of anxiety-and fear-related behaviors that were developed and validated in males. We consider the limited information available on how females behave in these tests, whether there are sex differences in responding and in particular, whether the estrous cycle influences responding in females.
The Estrous Cycle
The estrous cycle in rats and mice, the most commonly used species for behavioral research, is characterized by a four- or sometimes five-day-long cyclic variation in secretion of ovarian hormones. The duration of the cycle may be less consistent in mice, varying from 2 to 8 days (23). During this time the two major sex hormones, estrogen (17β-estradiol in rodents) and progesterone, undergo dramatic out-of-phase fluctuations in the level of secretion (Figure 1A). Since these lipophilic steroid molecules pass readily through the blood-brain barrier, their concentration in the plasma is followed by parallel changes in concentration in the brain, where both hormones are neuroactive, acting at genomic (nuclear) as well as membrane-bound receptors, the latter leading to rapid non-genomic effects on membrane excitability (29).
There are no universally accepted criteria for defining cycle stage, and much confusion arises in the behavioral literature with respect to estrous cycle stages. Classification of estrous cycle stage is undoubtedly one of the major factors that contributes to variability and lack of reproducibility of results obtained by different laboratories using rodents to model anxiety-related disorders in women. For behavioral experiments, sampling plasma hormone levels is not practicable as a routine procedure to assess gonadal status. However, changes in peripheral hormone levels are reflected by a changing vaginal cytology as the reproductive tract is primed to prepare for pregnancy. In non-mated, single-sex-housed females, the cycle can be subdivided into a number of stages based on vaginal cytology. This provides a convenient, if imprecise, surrogate for the changing hormonal profile within the brain.
Samples for cytological evaluation are obtained from the vaginal wall, either by lavage (flushing the vagina with water or buffer) or by inserting a probe into the vagina to obtain a cell sample.
Figure 1. (A) Secretion of ovarian hormones across the rodent estrous cycle; broken lines indicate midnight. Approximations of cycle stages are marked by broken lines set arbitrarily at midnight. P, proestrus; E, estrus; ED, early diestrus (diestrus I); LD, late diestrus (diestrus II). Adapted from Smith and coworkers (24). (B) Photomicrographs show the characteristic cytology of vaginal smears obtained from rats (25) and mice (26). In rats the cycle stages were classified as proestrus (PRO), estrus (OEST or EST), early diestrus (ED) and late diestrus (LD), whilst in mice the authors subdivided the diestrus stage into metestrus (MET) and diestrus (DIST). Shown are round-nucleated epithelial cells (e), larger cornified cells (c) and polymorphonuclear leucocytes with distinctly lobed nuclei (li) or a clumped nucleus (liii). Note that the magnification of the mouse smears is lower than for the rats; no scale bar is available for the mice. (C) Relative proportions of different cell types in vaginal smears at different stages of the estrous cycle in rat and mouse. From Cora et al. (27), adapted from the Byers and Taft (28) estrous cycle identification tool.
Three types of cell can readily be distinguished:
nucleated epithelial cells, keratinised squamous cells and leucocytes, the proportions of which vary throughout the cycle. The reader is referred to excellent descriptions and illustrations of cell types (30,31). During the cycle, four stages of unequal duration: proestrus, estrus and diestrus, the latter commonly subdivided into two stages, may be identified according to the appearance and relative number of different cell types, with a gradual rather than step-like transition between stages. For convenience, most laboratories conduct behavioral experiments during the animal's light period, having collected a vaginal smear in the morning of that day. Typically, in rats proestrus and estrus are assigned a day each, starting at midnight, whilst diestrus, which lasts longer, is subdivided into two periods termed by different workers early and late diestrus, diestrus I and II, or metestrus and diestrus (Figure 1A).
Vaginal Cytology and Cycle Stage
Proestrus is readily identified by the presence of nucleated epithelial cells, which, in smears stained with Giemsa or similar stains, display purple nuclei clearly visible within a blue cytoplasm (Figures 1B,C). Proestrus typically lasts around 14 h, but within that time window there are rapid changes in the hormonal profile. In rats maintained on a 12 h on, 12 h off light-dark cycle (lights on at 06.00 h), as used in many laboratories, progesterone secretion remains very low from midnight (00.00 h on the day of proestrus) until about 15.00 h in the afternoon, when a rapid spike in secretion starts. Progesterone concentration peaks in the evening in the early part of the dark phase, then declines rapidly to basal level by around midnight (00.00 h) (24,32). Estradiol, which has been rising gradually over the previous 3 days, reaches peak concentration at around midday (12.00 h), and then declines, returning to basal level by the late afternoon (24,32). Thus, mornings are characterized by low progesterone and high estradiol, whereas during the afternoon the surge in secretion of progesterone leads to the highest concentration achieved during the cycle, whilst estradiol concentration is declining rapidly (24,32). Given the rapidly changing hormonal profile during proestrus, the timing of behavioral experiments during the day of proestrus deserves consideration as a potential source of variability.
In estrus, which lasts 24-48 h, the nucleated epithelial cells characterizing proestrus are replaced by large, keratinised squamous cells (Figures 1B,C). Secretion of progesterone and estradiol remains at a low, stable level throughout estrus (24,32).
Diestrus is the longest lasting phase and the source of the most discrepancies in classification. The diestrus period is characterized by an abundance of leucocytes in smears (Figures 1B,C), but the number, appearance and presence of other cell types and of mucus in the smears varies. As mentioned above, most workers subdivide diestrus into two phases variously termed metestrus and diestrus; diestrus I and II; or early and late diestrus. These terms are not necessarily interchangeable, and precise cytological descriptions of the criteria applied for classification are essential although, regrettably, not available in all studies. At the beginning of diestrus progesterone secretion begins a progressive rise, which continues until the early morning of the second day when secretion terminates abruptly, precipitating a rapid fall in concentration (Figure 1A). In contrast, estradiol remains relatively stable during this period (Figure 1A). Rapid withdrawal from progesterone has been shown to trigger plasticity of GABA_A receptor subtype expression that leads to significant changes in excitability of brain circuits associated with anxiety (33)(34)(35).
The Estrous Cycle-Mice
Increasingly, mice are being used for behavioral research in order to capitalize on the availability of an increasing number of genetically modified strains, which can help in defining the neurochemistry of emotional behavior. Female mice display similar, but not identical, changes in vaginal cytology to rats during their estrous cycle (Figures 1B,C). However, it is much more difficult to infer a causative link between changing levels of brain neuroactive steroid hormones and behavior in mice compared to rats. Diurnal fluctuations in adrenal secretion of progesterone and its metabolites, which undergo a surge in the dark period, far outweigh ovarian secretion rates (36). Moreover, the level of adrenal secretion of progesterone also appears to be estrous cycle stage dependent (37), unlike in the rat (38). Metabolism of progesterone in mouse brain also differs from the rat (39) but, perhaps most importantly, the concentrations of progesterone and its metabolite tetrahydroprogesterone (THProg) in female mouse brain are determined predominantly by changes in the supply of endogenous brain progesterone (36), rather than by peripheral sources.
Refinements and Alternatives to Vaginal Cytology
There is a pressing need to standardize classification of estrous cycle stages to facilitate comparisons between cycle-linked changes in behavior reported by different laboratories. Several attempts have been made to develop rapid, objective methods, particularly for use in mice. These include using non-stained smear material (27,40), modifications to staining methods (41) and application of deep learning technology for classification of smears (42). Alternatives to vaginal cytology have been proposed based on gross examination of the vaginal opening (43), changes in skin temperature due to activation of brown adipose tissue (44) and variations in vaginal wall impedance (45). To date, none of the latter methods has been widely adopted and vaginal cytology remains the gold standard for assessing estrous cycle stage. Avoidance of handling stress is also an important source of variability that needs to be considered. Vaginal lavage can lead to raised plasma corticosterone with associated deficits in spatial memory (46). In male rats, even the acute stress of handling in animals not habituated to the procedure leads to a raised brain concentration of progesterone (47). A similar effect may be produced in females. In skilled hands, however, smear collection is minimally stressful and the stained smears provide a permanent record available for objective scrutiny by blinded personnel. Even so, classification of the diestrus stage remains a source of confusion between studies. For the purposes of simplicity, with the caveat that criteria for defining these stages may differ between laboratories, in this article we will use diestrus I to also include the phases termed by different authors metestrus, early diestrus and diestrus 1. Diestrus II encompasses stages termed late diestrus or diestrus 2. Diestrus refers to studies in which no distinction has been made between stages. There is clearly a pressing need for a universally agreed consensus on cytological criteria for estrous cycle staging to facilitate comparison between results obtained from different laboratories.
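To illustrate how the qualitative criteria described above could be operationalized, the following is a deliberately crude, hypothetical sketch (not a validated protocol) that maps counted cell proportions in a stained smear to a stage call; all thresholds are arbitrary placeholders:

```python
# Rule-of-thumb staging from cell counts: nucleated epithelial cells dominate
# in proestrus, cornified squamous cells in estrus, leucocytes in diestrus.
def stage_from_counts(epithelial: int, cornified: int, leucocytes: int) -> str:
    total = epithelial + cornified + leucocytes
    if total == 0:
        return "unclassifiable"
    e, c, l = epithelial / total, cornified / total, leucocytes / total
    if e > 0.5:
        return "proestrus"
    if c > 0.5:
        return "estrus"
    if l > 0.5:
        return "diestrus (I/II: distinguish by cytology and day of cycle)"
    return "transitional"

print(stage_from_counts(epithelial=70, cornified=20, leucocytes=10))  # proestrus
```

Any real classifier would of course need validation against hormone profiles, which is precisely the standardization problem raised in this section.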
ANIMAL MODELS OF ANXIETY-BEHAVIOR OF FEMALES IN MALE MODELS
The emotional states of fear and anxiety have an adaptive value. Novel stimuli/situations present a potential threat to the survival of an individual. In small prey species the most pragmatic response is usually to escape or alternatively, to become immobile in order to reduce the risk of being detected. This strategy becomes maladaptive if it is applied indiscriminately in response to all novel stimuli since other behaviors essential for survival such as foraging or finding a mate, will be compromised. Instead, the animal needs to display a level of vigilance in order to detect changes in its environment and then to assess the level of threat and the risk before deciding whether a modification to its ongoing behavior is appropriate (48).
Behavioral tests in animals designed to elicit fear and anxiety are broadly based on two tenets: (i) fear operates to move the animal away from danger. It involves fight/flight/freezing, and these defensive responses are more commonly evoked by unambiguous, immediate/proximal threats, such as the confrontation with a predator (49); (ii) anxiety refers to a preparatory response to possible future threatening events, especially in situations where there is conflict between different goals, such as between avoiding a potential threat and being attracted to food (50). It can be translated into behaviors such as risk-assessment, including the scanning of the environment and hyper-attentiveness to the potential threat, along with disruption of ongoing behaviors (51).
Animal models of anxiety and fear can be broadly grouped into two main subclasses: the first involves ethologically based paradigms built on an animal's spontaneous or natural (unconditioned) reactions (e.g., flight, avoidance, freezing, risk-assessment) to stress stimuli that do not explicitly involve pain or discomfort but represent a threat to survival (e.g., exposure to a novel, highly illuminated test chamber or to a predator). The second includes animals' conditioned responses established following exposure to stressful and often painful events (e.g., electric footshock) (52,53).
Below, we consider a number of behavioral tests that were developed in males, and the available information regarding the behavior of females in these tests. The findings are summarized in Table 1.
Tests Based on Unconditioned Threatening Stimuli
Elevated Plus Maze (EPM)
The EPM is the most widely-used animal model for investigating the pathophysiological bases of anxiety, as well as for screening anxiety-modulating drugs and mouse genotypes. After nearly 40 years of use, with nearly 8,700 publications listed as of May 2021 (PubMed, NIH National Library of Medicine), the EPM remains the gold standard against which other behavioral tests for anxiety are measured, at least in males (52, 56, 57). Even so, some have expressed reservations about the use of the EPM (20).
Despite its widespread use, relatively few studies have considered sex differences in responding, or the effect of the estrous cycle in females. The consensus, based on the limited literature available, is that adult female rats behave in a qualitatively similar manner to males in the EPM but display overall lower levels of anxiety (58-61). Sex differences were not, however, observed in young adult mice (C57BL/6NIA strain), although interestingly, aged females were found to be more anxious than males (62,63).
Once the estrous cycle is taken into account, the picture starts to become less clear. In rats, some investigators report reduced anxiety levels (i.e., more time spent in the open arms of the maze) in proestrus/estrus compared to diestrus (16-18, 40, 64-70), whereas others fail to see estrous cycle-linked effects at all (16,19,61,(71)(72)(73). However, not all workers compare all stages of the cycle, and in some cases results from two stages of the cycle have been pooled, which makes direct comparison between studies difficult.
The results from studies in mice paint a similarly confusing picture. Some studies report that mice in proestrus show more open arm entries and spend a longer time on the open arm of the EPM than diestrus females or males (66,68,74,75) whilst others found that mice in estrus spent longer on the open arm than mice in diestrus (76) and yet others were unable to detect any difference in performance between mice in estrus and diestrus I (defined as receptive stages) and diestrus II and proestrus (defined as non-receptive stages) (23).
In males, many factors have been identified that can influence responding and potentially lead to inconsistencies between results from different laboratories due to methodological differences. The importance of recognizing and carefully controlling testing conditions, particularly light level, has been highlighted (73,77,78). The age of the rat, circadian phase and light illumination level during testing have all been reported to influence the behavior of males (77, 79-84) but see (85) for an opposing view.
Responsiveness of females has not been subjected to the same level of scrutiny. There is, however, some evidence that strain (particularly in mice), light level and circadian phase may affect responsiveness (26, 65, 86).
In terms of assessing the influence of the estrous cycle, the difficulties in making comparisons between results obtained from different laboratories are further compounded by the fact that in many cases only two stages of the cycle have been compared, typically proestrus and diestrus (not subdivided into the substages), whilst in other studies data from two phases are pooled, e.g., proestrus/estrus, thereby precluding evaluation of the effect of an individual cycle stage. At present there is no consensus.
Elevated T-Maze
The EPM, discussed above, may be viewed as a mixed model of anxiety/panic, because it combines two defensive strategies: (1) inhibitory avoidance, when the animal is in the enclosed arms and refrains from entering an open arm, and (2) one-way escape, when the animal retreats from one of the open arms to seek the enclosed arm. The unstable balance between the expression of these two types of responses could explain the inconsistencies of drug effects in males, mainly 5-HT-modulating compounds, frequently reported for the EPM (87-89). To circumvent the ambiguities of the EPM, Graeff and coworkers (90, 91) developed the elevated T-maze (ETM). The ETM differs from the EPM by sealing the entrance to one of the enclosed arms (Figure 2). As a result, it consists of three arms of equal dimension, one enclosed and two open, all elevated above the floor. The test allows the measurement, in the same rat, of both an approach-avoidance conflict-type response (inhibitory avoidance, which is related to anxiety) and an escape response, which is related to fear/panic. When placed at the end of the enclosed arm, the rat cannot see the open arms until its head pokes beyond the walls of the enclosed arm. Because the open arms are aversive, the animal will learn inhibitory avoidance when repeatedly placed at the end of the enclosed arm and allowed to explore the maze. On the other hand, when the rat is placed at the end of one of the open arms it can move toward the safer enclosed arm, performing an escape response, termed one-way escape, which is associated with fear/panic attacks. Contrary to what happens in the enclosed arm, the latency to leave the open arm usually does not change with successive trials. Anxiolytic drugs (e.g., diazepam and buspirone) impair inhibitory avoidance acquisition, while leaving escape expression unaltered. Antipanic drugs such as the antidepressants fluoxetine or imipramine, or high-potency benzodiazepines (e.g., alprazolam and clonazepam), inhibit escape expression [for a full account of the test see (92)].
The limited comparisons made between male and female rats in this test show no sex differences for inhibitory avoidance acquisition or escape expression in Wistar rats (93-95). However, female Long-Evans or Sprague-Dawley rats show a deficit in avoidance learning compared to males, indicating a less anxious phenotype (96). Therefore, it seems that performance in the test may vary between different strains of rat.
With regard to the influence of estrous cycle phases, one study reported that female Wistar rats in diestrus II are slightly more reactive to the open arms than males (i.e., they take longer to leave the enclosed arm, but only in the first trial), indicating a higher anxiety level (93). However, this subtle effect was not replicated in a recent study with the same rat strain (95).
Open Field Test
Individual laboratory-based anxiety tests, which by their nature are artificial, probably reflect different facets of emotionality (97) and viewed in isolation, cannot provide a complete picture of an animal's emotional profile. To overcome this limitation, it has been proposed that using a battery of tests may provide a more reliable measure, at least in males (98,99).
Behavior in an open field or, more correctly, a walled arena (Figure 2) is often paired with other tests and used as a measure of locomotor activity as well as anxiety. The former is typically quantified as the total distance traveled in a given time, whilst the time to re-enter the center of the arena, or the time spent in the central portion of the arena, is used as a surrogate for anxiety, reflecting the choice made by the animal as it pits the novelty of exploring a new environment against the risk of danger posed by leaving a safe area next to the walls.
Overall, female rats have been reported to be more active in the open field compared to males, showing greater ambulatory and rearing activity and defecating less than males (100-103), and appearing to be less anxious about entering the central zone. However, the literature is far from consistent, with several examples of no difference in behavior between sexes. In mice too, a retrospective analysis of performance in the open field concluded that the performance of female and male mice was equivalent (104). In male mice, strain differences have been reported in locomotor activity in the open field as well as in anxiety in the EPM (97, 105); sex differences in responding are evident within some strains although not others (97).
Pooling data from all females risks masking possible effects of estrous cycle stage on responding. Even so, when the estrous cycle has been taken into account, the findings are equivocal. For example, rats in proestrus showed less anxiety-like behavior than rats at other stages (66). Conversely, no differences were detected in anxiety-like or fear behaviors between proestrus and diestrus rats (106). Similarly in mice, some workers report proestrus wild-type BALB/cBy mice to make more entries and spend longer time in the central zone of the open field compared to their diestrus (stage not subdivided) counterparts (26, 68), whilst others using C57BL/6J mice found that behavior in the open field remained stable across the four phases of the cycle (26, 107).
A recent insightful study assessing behavior in the open field in rats has demonstrated how multiple variables can acutely modulate each other in different contexts, and highlights the importance of considering each of these factors. Miller et al. (108) observed independent interactions between the estrous cycle and novelty (experiencing the open field for the first time), the estrous cycle and light, and novelty and light, wherein each factor concurrently influences behavior. Novelty was found to obscure estrous cycle effects. Similarly, estrous cycle-linked effects were not evident in experiments carried out under white light, which rats find aversive, but could be observed in experiments conducted under dim red lighting (108). Another factor that appears to influence responding is the size of the field. Female rats in a large open field (129 × 120 × 60 cm, dim red light, 18 lux) spent more time in the central zone and made more central zone entries than males (60). However, using smaller arenas [a 70 × 70 × 70 cm arena, light level not specified (58), and a 54.5 × 80 × 33 cm arena, dim red light, 18 lux (61)], no sex difference was observed in either distance traveled or entries into the central zone. These apparently conflicting findings are a concern. However, given the limited visual acuity of laboratory rats, large arenas may be perceived to pose more of a threatening challenge than small ones. The higher level of exploratory behavior displayed by females compared to males in large arenas may therefore reflect a real sex difference in terms of intrinsic level of anxiety.
When the open field test is incorporated into a battery of tests designed to assess fear and anxiety, conflicting findings have been reported regarding sex differences and estrous cycle-linked effects on anxiety-like behavior in the same animals exposed to the EPM and to the large open field (59, 61, 97, 102, 109-113). Importantly, an estrous cycle-related influence on behavior in the open field does not necessarily predict the behavior of the same animal in the EPM (58, 60, 61, 68, 114).
The aforementioned studies present a confusing picture. Given the numerous factors that can influence responding in the open field and the EPM, it is likely that methodological differences between laboratories are major factors that contribute to the lack of consensus regarding sex differences or the effect of the estrous cycle. What is evident is that females behave in a qualitatively similar manner to males in both the open field and elevated plus maze tests, but whether there are sex differences in responding remains an open question. In the studies that have found sex differences, females have generally displayed lower anxiety levels compared to males. This finding is in complete contrast to humans, in whom not only is the incidence of anxiety-related pathological states higher in women than in men, but the symptoms experienced by women are often menstrual cycle-related.
Light-Dark Transition Test
The light-dark transition model was developed by Crawley and Goodwin (115,116), based on the exploratory behavior of rodents in a two-compartment box, where one chamber is brightly lit and the other dark (Figure 2). In such conditions, mice and rats have a clear preference for the dark side of the box and the number of transitions made by them between the two compartments and the time spent in the brightly lit side have most commonly been used as indices of anxiety.
Although a reasonable number of studies in the literature have compared the behavior of males and females in these tests, very few have explored the impact of the cycle phases on the female response. The majority of the studies performed either with rats (117-123) or mice (63, 124-130) have failed to show sex differences in this test. In some of these studies direct comparisons among strains (97, 131, 132) and/or ages of the animals (133, 134), which are critically relevant variables, were performed, but no sex-related effect was found. There are, however, a few reports showing that females are more (135-138) or less (139-143) anxious than males.
Regarding the cycle phases, female rats in proestrus, or in estrus+proestrus, are less anxious compared to the other phases (18, 144, 145) or to males (144, 146). In the only study available in mice, a lower anxiety level was detected in the proestrus and estrus phases compared to diestrus or to males (114). It is clear that pooling data from females can mask significant sex differences due to the influence of a changing hormonal profile during the estrous cycle.
22 kHz Ultrasonic Vocalizations
Rodents use a range of ultrasonic calls to communicate the presence of positive or negative emotional states and to coordinate social interactions (147). In male rats in a semi-naturalistic environment (the visible burrow system), the presence of a predator (domestic cat) stimulates animals to emit high-frequency ultrasonic vocalizations (USVs) at around 22 kHz (148). The 22-kHz USVs are thought to act as a warning to conspecifics, since far fewer calls are made if the confrontation with the predator occurs when the rats are remote from their social group (149, 150). Female rats also emit 22 kHz USVs, but their calls are longer and more frequent than those made by males (149, 151). Interestingly, mice, which are also prey for cats, do not emit such cries in similar threatening situations (150). Adult mice do communicate utilizing USVs but at other frequencies, primarily in the context of social interaction (152).
In rats in the laboratory setting a number of stimuli have been identified that evoke innate defensive escape behaviors that include emission of 22 kHz USVs. These include mild restraint stress (153) (Figure 2); air puff (154,155); forced swimming (156); overhead looming stimuli simulating aerial attack (157) and unavoidable acute or repeated footshocks (158). The 22 kHz USVs emitted in these settings are widely believed to reflect a negative affective state akin to anxiety and fear (159).
Compared to the extensive literature on 22 kHz USVs made by male rats in laboratory-based tests, USVs in females remain largely overlooked. The limited available information suggests that females may be less responsive than males. In response to air puff stress, females emit fewer 22 kHz calls than male rats although, interestingly, freezing evoked by the same stimulus did not differ between sexes (9, 155). Female Wistar rats submitted to a short period of non-noxious restraint stress also emitted far fewer 22 kHz calls than male rats (153). However, within the female cohort used in this study there was a marked effect of estrous cycle stage. Females in their proestrus, estrus and early diestrus (diestrus I) phases emitted very few calls, but during the late diestrus stage (comparable to diestrus II) calls increased 5-fold, reaching a level comparable to males (153). In the air puff test, Inagaki and Mori (9) also failed to detect differences between the 22 kHz USVs emitted by rats in proestrus and diestrus I. However, since responsiveness in other stages of the cycle was not investigated, it is not possible to conclude whether the estrous cycle impacted on this test. These findings do, however, contrast with reports that female Long-Evans rats living in a semi-naturalistic environment (the visible burrow system) made more frequent cries in the presence of a predator than males (149). Whether this reflects a strain difference, or an influence of the living environment, is not clear.
CO2 and Hypoxia Challenges
A wealth of evidence shows that respiratory challenges such as exposure to a high concentration of CO2 or a low concentration of O2 evoke panic attacks in humans (160-162); these stimuli have frequently been used as experimental tools to study panic disorder (161, 163). Although the pathophysiological mechanisms of panic disorder remain unclear, there is compelling evidence linking this psychiatric condition to respiratory disturbances [for a review see (164)].
The use of respiratory challenges to model panic attacks in experimental animals has been less straightforward, and the results obtained raise doubts that a panic-like state was indeed evoked in these non-human subjects. Broadly speaking, in these analyses, conducted mostly in male rats and mice, different parameters, primarily autonomic indices (i.e., arterial blood pressure and heart and respiratory rates), have been used to infer that an extreme fear response, and hence a panic-like state, was evoked (165-169). Investigation of the behavioral consequences induced by CO2 inhalation or hypoxia has also been carried out in some cases but, curiously, this has habitually been done after, and not during, exposure to the respiratory challenges [e.g., (170-174)]. Efforts have been made to investigate whether changes in these cardio-respiratory indices are sex-dependent, or influenced by the estrous cycle, but their results, as recently reviewed (175), have not been conclusive.
Our laboratory reported that Wistar male rats submitted to acute hypoxia (7% O2) display a panic-like escape response (i.e., upward jumps to the border of the experimental cage) (Figure 2), which is reduced by treatment with standard panicolytic drugs such as fluoxetine and alprazolam (176). We also observed that these drugs are equally effective in reducing the number of escape attempts made by mice during exposure to a high CO2 concentration (20%) (177), validating these two behaviorally-oriented tests for the study of panic attacks in male rodents.
Recently, we have also validated the hypoxia model for use in females. We observed that exposure to 7% O2 evokes panic-like escape behavior in both male and female Sprague-Dawley rats. However, in females, reactivity to this respiratory challenge was clearly dependent on the stage of the estrous cycle, being significantly higher in diestrus II compared to other cycle stages or to males (178). This finding has an important translational value, since women with panic disorder experience an increase in anxiety and panic symptoms during the premenstrual phase of the menstrual cycle (179, 180), which corresponds to diestrus II in rodents.
Predator-Prey Interaction
Exposure to predators or stimuli related to them (e.g., predator odor) has been widely used to assess defensive behavior in rodents. The influential ethoexperimental studies conducted by Caroline and Robert Blanchard have long guided research in this field. Through the use of two ingenious test batteries, the Fear/Defense Test Battery (F/DTB) and the Anxiety/Defense Test Battery (A/DTB) (149, 150, 181, 182), these researchers addressed the pattern of defensive behavior expressed by male and female rodents exposed to these naturalistic threats. While the former battery has given information on the defensive behaviors displayed by rats to a present, approaching predator (a live cat) (Figure 2), such as flight/escape, freezing and defensive attack, the latter investigates reactions to potential threat (e.g., cat odor), such as risk-assessment behaviors.
Overall, their results have shown that females are more defensive than males when confronted by these stimuli, and this is particularly common in situations involving potential, as opposed to actual and present, threat (148,183). Females, for instance, display more risk-assessment and avoidance behaviors than males do in response to cat odor (148).
It is noteworthy, however, that conflicting results have also been reported by other groups. Perrot-Sinal et al. (184) observed that exposure to cat odor increased the expression of risk-assessment behaviors in both male and female rats, but with a significantly lower frequency in females. However, when animals were submitted to chronic restraint stress prior to testing, females displayed a higher incidence of these behaviors than males. This indicates that the basal anxiety/stress state of the animals before the test can influence the way they respond to predatory stimuli. On the other hand, exposure of rats to an odor stressor of a different predator (trimethyl thiazoline, the main component of fox feces) increased, in a sex-independent manner, the expression of defensive behaviors, such as risk-assessment activities and defensive burying (185). Interestingly, the similarity in behavioral responsiveness masked sexually dimorphic changes in cell proliferation and death in the hippocampal dentate gyrus (185).
More recently, Pentkowski et al. (186) investigated the impact of estrous cycle phases on the unconditioned and conditioned defensive responses of female rats to cat odor. They observed that rats in diestrus II displayed significantly higher levels of risk-assessment responses during exposure to a cloth impregnated with cat odor than rats in the estrus or proestrus phases. When, 24 h later, the animals were reintroduced to the cage where the odor had been presented (now containing a control cloth, without cat odor, which served as a stimulus-paired cue) in order to explore the conditioned responses to the experimental context/cue, a significant increase in defensiveness was observed in the animals previously exposed to cat odor (i.e., increased time spent in risk-assessment activities and avoiding the cue), demonstrating aversive learning. In contrast to the initial exposure (unconditioned response), there was no influence of the cycle phases on the learned response.
Finally, it is noteworthy that, besides the effects of predator odors on defensive behaviors, it has been shown that exposure of weanling female rats to cat odor for 10 consecutive days interferes with the maturation of the hypothalamic-pituitary-gonadal axis, leading to delayed vaginal opening and first estrus, besides disrupting estrous cyclicity (187).
Vogel Conflict Test
The Vogel test is based on the approach-avoidance conflict generated in rodents between an appetitive drive, to drink water after a period of water deprivation, and the fear of doing so, as water consumption is punished by electric shocks delivered either to the animal's paws or tongue (Figure 2). Since its introduction in 1971 (188), this test has been widely used for the screening of anxiolytic drugs and to unveil the pathophysiological bases of anxiety (189-193). As mostly inferred from studies with males, anxiolytic drugs, such as the benzodiazepines diazepam and chlordiazepoxide, consistently increase the number of punished responses (190, 194).
As with other models, few studies have directly compared the behavior of males and females in this test. Overall, female rats (59, 194-196) and mice (197) exhibit a reduced number of punished responses compared to males, suggesting a higher anxiety level. However, this conclusion has been questioned by evidence showing that female rats may have increased sensitivity to pain, a lower shock threshold perception, and reduced unpunished drinking responses compared to males. The latter effect, which indicates a different baseline water intake, was observed after controlling for the body weight of both sexes, an important and normally overlooked confounding variable [for a review of these findings see (194)].
To date, only one study has addressed the impact of the cycle phase on female behavior. Basso et al. (194) failed to find any significant difference between cycle phases in the number of punished responses exhibited by adult Wistar female rats. They also reported that the cycle phases had no impact on the effects of the anxiolytic drugs tested in their study.
Conditioned Fear and Fear Potentiated Startle
Two commonly used indices of fear responses in male animals are based on the association of specific stimuli (cued or contextual) with stressful and often painful events (e.g., electric footshock) (52, 53). In conditioned fear responses (CF) (198), rats are trained to associate a conditioned stimulus (CS; typically light or sound) with an aversive unconditioned stimulus (US; footshock). The animals are then re-exposed to the CS alone in a different context. Freezing in response to the CS is then taken as an index of conditioned fear (Figure 2). Fear potentiated startle (FPS) is a related test that measures the potentiating effect, on the startle response to a loud sound, of presentation of a CS that has previously been paired with an aversive US (footshock) (199) (Figure 2). As a whole, female rats perform in a qualitatively similar manner to males in both tests and, although some workers find females less responsive than males (200-203), others have failed to detect sex differences (107, 153, 204-207).
Similarly, inconsistent findings have been reported in mice. For example, depending on strain and precise experimental protocol, no sex difference in contextual fear conditioning has been found (208); stronger context fear conditioning and more generalization to a similar context have been reported in females compared to males (209); whilst extinction of conditioned freezing to a tone was faster in males than in females (210). Using a serial compound conditioned stimulus (tone and white noise, which elicits clear transitions between freezing and flight behaviors within individual subjects), females exhibited more freezing behavior than males, although there was no difference between the sexes in flight behavior (211).
When estrous cycle phase has been considered, the consensus from the limited number of available studies is that it does not influence expression of fear-potentiated startle (106, 107), although a more recent study presents a conflicting view (212), nor does it impact on conditioned fear to context (153, 204, 205, 213). In contrast, in tests of cued fear (conditioned freezing), Milad et al. (206) found that extinction training during the proestrus phase (high estrogen/progesterone) was more fully consolidated, as evidenced by low freezing during a recall test. Others reported weaker extinction during training in rats in diestrus II (212), or in the diestrus phase [not subdivided (213)], compared to proestrus; i.e., rats continued to respond to presentations of the unreinforced CS for longer during the test session than rats in proestrus, in which the response extinguished rapidly.
A consideration when using tests that involve a learning element is that they take several days to complete and typically involve two or more sessions. This means that females may be conditioned in one stage of their estrous cycle but tested on another day, when they are in a different stage. There is a possibility that cycle stage during conditioning may impact on responsiveness during the test session, and vice versa. In the limited studies that have addressed this question, estrous cycle phase seems not to impact significantly on the training or testing components in the conditioned fear paradigm (204, 212). On the other hand, gonadal status does affect fear potentiated startle. Females tested in proestrus after conditioning in proestrus or diestrus II of the previous cycle appeared initially to fail to distinguish between a positive and a neutral conditioned stimulus, although performance improved as the test session progressed (212). Circulating estrogen levels are high in proestrus, and estradiol has been shown to promote fear generalization to context (214-216). An apparent failure in discriminatory learning of rats tested during proestrus may in fact reflect generalization to positive and negative conditioned stimuli, rather than a failure of learning (212). In another recent study employing startle, but in a fear safety conditioning paradigm, female rats in diestrus I or II had significantly reduced safety memory compared to females in the proestrus or estrus phase (217).
It is interesting to speculate why some tests of conditioned fear are affected by estrous cycle stage and not others. Fear potentiated startle differs from other commonly used tests of fear by measuring enhancement rather than suppression of ongoing behavior (199). It may be that this factor renders the test more sensitive to the effects of hormonal fluctuations in females. The above examples do, however, emphasize not only the importance of drug testing in both males and females, but also the choice of behavioral test. In addition, they highlight that neglecting the influence of the estrous cycle in females may lead to erroneous interpretation of data in some behavioral tests.
Tests Based on Fear Extinction
In recent years much attention has been focused on extinction of conditioned fear responses. In humans, deficits in extinction of conditioned fear during repeated presentation of an unreinforced CS are considered a contributory factor underlying anxiety disorders (218, 219). Anxious individuals show more elevated fear responding to a CS during extinction relative to healthy controls (220). Patients with posttraumatic stress disorder (PTSD) also continue to exhibit a robust conditioned fear response even after undergoing extinction training (221). In animals, fear extinction (the decrement in conditioned fear responses that occurs with repeated presentation of an unreinforced conditioned fear stimulus; Figure 2) may therefore provide a useful model to help understand the underlying psychopathology of anxiety states (222). Deficits in the extinction of fear memory, and the way this impacts on subsequent interpretations of and reactivity to sensory events, may be at the core of PTSD. It is worth noting, however, that in classical Pavlovian terms, extinction implies gradual waning of the conditioned response as a consequence of non-reinforcement of the conditioned stimulus (CS). In PTSD, the individual does not usually experience the exact CS again. Rather, they appear to generalize from the original traumatic event so that other stimuli act as a CS and trigger an aversive reaction. Nevertheless, the use of extinction recall/retention following fear conditioning has gained currency in conditioned fear models of PTSD.
PTSD is twice as common in women as in men (223); moreover, in women, menstrual cycle phase has been reported to influence extinction retention (224). The importance of incorporating females into animal tests of fear extinction cannot be overstated. In rats, the phase of the estrous cycle prior to extinction training (testing the response to the unreinforced CS 24 h after training) can influence extinction recall 24 h later. Rats that underwent extinction learning in diestrus I displayed poorer retention of extinction compared to animals undergoing extinction learning in proestrus (206).
As females are gradually incorporated into experimental protocols, it is becoming clear that even when sex differences in behavior are not detected, the same outward behavioral response may be mediated by different mechanisms. A pertinent study utilizing conditioned fear in mice reported similar levels of fear extinction in males and females (215). However, the similarity in behavior between the sexes belied differences in the underlying pharmacology. Whilst in males extinction and subsequent renewal of fear were enhanced by administration of a presynaptic GABA-B receptor antagonist, females were unaffected (215). In a similar vein, although no sex differences could be distinguished in freezing recall in rats tested in a contextual fear paradigm, significant upregulation of the immediate early gene Arc was detected in the bed nucleus of the stria terminalis in males but not in females (225). Sexual dimorphism with respect to the involvement of endocannabinoid pathways in conditioned fear extinction has also been reported in rats (226).
The Risky Closed Economy
The classical tests of fear and anxiety behavior in rodents assess specific behaviors (e.g., freezing) during brief sampling periods and in an artificial laboratory setting, providing only a "snapshot" of fear and anxiety-related behaviors. This limitation has driven a search for more ethologically relevant settings in which to study fear and anxiety-like behaviors. In the Risky Closed Economy (RCE) (227) animals live undisturbed, although individually housed, in a semi-naturalistic environment where they are free to acquire their food and water by lever-pressing in a designated foraging zone (Figure 2). An unsignaled, unpredictable threat (footshock) is introduced into the foraging zone to model the risk of predation.
Arguably, this test should afford a more holistic understanding of the effects of fear and anxiety on a day-to-day basis, since data from a multitude of variables can be collected automatically and continuously over several weeks to months (228). When applied to females, the RCE also has the advantage of being able to follow the same animal at different stages of its estrous cycle. In terms of foraging behavior in the RCE, female rats were more fearful than males. Moreover, estrous phase appeared to influence risky foraging decisions, with increased risk taking associated with the proestrus phase (227). This finding is significant since, unlike in most of the behavioral tests employing less ethologically relevant scenarios, the increased level of fear or anxiety seen in female rats in the RCE during the diestrus phase parallels the human experience.
SEX DIFFERENCES IN RESPONSES TO PSYCHOACTIVE DRUGS
It is becoming increasingly clear that male and female brains do not necessarily utilize the same neural mechanisms to achieve the same behavioral output. Moreover, as females are gradually introduced into drug testing protocols, evidence is accumulating showing sex differences in drug responsiveness as well as differential responsiveness within females depending on the stage of the estrous cycle. Sex differences in responding to the classical anxiolytic benzodiazepines have been recognized for many years. Females are generally considered to be less responsive than males. However, the results must be viewed with caution. For example, the apparent absence of a female response to diazepam in the EPM was found to be due to the high baseline activity levels seen in females, rather than to a differential response to the drug (229). Such findings highlight the need to consider sex differences in baseline behaviors to allow for unambiguous extrapolation of results. In addition, sex and strain differences in the metabolism of benzodiazepines have been reported, which can also bias results (230), although no sex difference in brain concentration was found, at least in Long-Evans rats (231).
The estrous cycle also impacts on responsiveness to benzodiazepines, although once again, findings are equivocal.
Most workers fail to investigate all stages of the cycle, which leads to incomplete data sets. Some studies in rats were unable to detect any estrous cycle-linked differences in responsiveness to diazepam (72, 232). However, in the EPM the overall consensus is that rats and mice are more sensitive to diazepam during the proestrus/estrus phases compared to diestrus, especially diestrus II (16, 19, 233-235). Similarly, in the light-dark transition test, Rodriguez-Landa and coworkers (18) found that diazepam caused anxiolytic effects in female Wistar rats in the proestrus or estrus phases, but not in the diestrus phase. This may be a consequence of higher binding of diazepam to brain membranes in proestrus compared to the other cycle stages (236).
Sex and estrous cycle-linked differences in responsiveness to other anxiolytic drugs have also been reported in other behavioral tests. Whereas no sex difference was observed in the effect of the serotonin and noradrenaline reuptake inhibitor sibutramine in rats tested in the EPM, in the ETM sibutramine impaired inhibitory avoidance (withdrawal from the enclosed arm) in males but not females, but inhibited escape expression (latency to leave the open arm) in both sexes (95). When the estrous cycle was taken into consideration, the antipanic-like effect of the drug on escape performance was found to be absent in females in diestrus II but preserved in the other cycle phases (95).
Female and male rats also differ in their sensitivity to the effects of anxiolytic drugs in the Vogel conflict test. Whereas an increase in punished responding was observed equally in both sexes after acute administration of diazepam and chlordiazepoxide, the anxiolysis caused by buspirone, fluoxetine, paroxetine, or propranolol was evident only in males. Moreover, female rats seem to be more sensitive to the sedative effects of buspirone and chlordiazepoxide than males (194). In another recent study employing startle in a fear safety conditioning paradigm, female rats in diestrus I and II had significantly reduced safety memory compared to females in the proestrus or estrus phases. This difference could be reversed by intranasal application of oxytocin (217) although, interestingly, oxytocin had no effect in males (217).
Sex and estrous cycle stage also impact on responsiveness to the widely used serotonin reuptake inhibitor fluoxetine. Chronic fluoxetine impaired inhibitory avoidance in a one-trial step-through task in male but not female mice (237). In rats, chronic (14 day) administration reduced fear responses during extinction learning and extinction recall in female rats in diestrus I and II but not in proestrus/estrus females or in males (15).
Chronic administration of fluoxetine is normally required for anxiolytic effects to develop. In contrast, acute administration evokes anxiogenic effects independent of sex (15, 193). However, at low doses that are subthreshold for its effects on 5-HT systems, fluoxetine can be anxiolytic in females, and this effect is dependent on estrous cycle stage. Administration of low-dose fluoxetine in diestrus II was able to completely reverse the increase in unconditioned fear that characterizes this stage of the cycle. Thus, fluoxetine in diestrus II reversed the increase in restraint stress-induced ultrasonic vocalizations, hypoxia-induced escape behavior, and vibration stress-induced hyperalgesia that characterize this stage of the cycle (33, 154, 178a), but had no effect when administered at other stages of the cycle (33). Fluoxetine also normalized the increased excitability of the panic circuitry in the periaqueductal gray matter that occurs during diestrus II (33) and restored responsiveness to diazepam in the EPM (19). These effects are thought to be related to the rapid steroid-stimulating properties of fluoxetine, which raise the brain concentration of allopregnanolone and offset the natural sharp decline that occurs at diestrus II (33, 238).
DISCUSSION AND CONCLUSION
The available data indicate that females respond in a qualitatively similar way to males in the majority of behavioral tests used to assess fear and anxiety in male animals. The overall conclusion from the behavior of females in "male" models of fear and anxiety is that females show lower levels of anxiety compared to males (Table 1). Yet this finding is in direct contrast to the clinical experience, where the prevalence of anxiety-linked disorders is higher in women than in men. It is worth remembering, however, that the commonly used animal tests model the adaptive states of fear and anxiety and not the psychopathology which characterizes human anxiety states like panic disorder, generalized anxiety disorder and posttraumatic stress disorder. It may be that a lower intrinsic (baseline) level of anxiety in females compared to males is normal in rodent societies and should not be a concern when investigating the biological basis of anxiety behavior.
Since the readout of animal tests is mainly locomotor-based, the overall higher level of activity in females could bias the results. However, a recent careful analysis of locomotor activity in three tests of anxiety (EPM, open field and social interaction) failed to detect such an influence (61). It may be that, instead of expressing less anxiety, female rats express different forms of anxiety-like behaviors that are not well captured by the testing procedures that have been developed and characterized using male rodents. The readouts of many common behavioral tests developed and validated in male animals may therefore need to be adjusted in order to assess the same emotional states in females. A case in point is the classic fear conditioning paradigm, whereby animals freeze in response to a conditioned stimulus or to a context signaling footshock. Overall, males froze more than females, but a subset of females were more likely to engage in "darting" behavior (226, 239, 240), which could not be attributed to overall hyperactivity.
A caveat to analysis of female behavior in the traditional "male" animal tests should be a consideration of the extent to which the current animal models of fear and anxiety do actually model these emotions in humans (241). Indeed, it has been questioned whether current fear conditioning studies in rodents operate in the natural world (228), since unnatural tasks performed by rodents living in standard laboratory conditions may not model their behavior in the wild, where the living environment and challenges to survival are quite different. For example, the impact of housing conditions on rodent brain and behavior (242) is well established and has led to the adoption of various degrees of enrichment into laboratory housing conditions for rodents. In humans, an ethological approach to fear has been successfully incorporated into experimental research paradigms (243). Animal studies lag behind in this respect, although the use of the visible burrow system, first demonstrated 30 years ago (148), which enables observation of the behavior of rats living in mixed-sex colonies, was an early pointer to the effect of environment on fear-associated behaviors. A major concern in terms of the translational validity of most currently used tests is that females display lower levels of anxiety in "male" models, whereas anxiety-related psychopathology is far more common in women than men. However, in more ethologically relevant situations, such as the risky closed economy, in an open field with cover, or when housed in a visible burrow system, females appear more anxious and risk averse than males (148, 183, 227, 244).
In women with anxiety disorders, including panic and PTSD, anxiety, fear, and avoidance symptoms tend to increase during the premenstrual phase, when progesterone is declining rapidly and estrogen is low (245, 246). In this respect, the observation that responsiveness in tests of unconditioned fear behavior in rats mimics the clinical experience is pertinent, especially in the light of findings that the menstrual cycle in women influences principally emotion, with limited effect on cognitive function (247). In female rats, unconditioned fear is significantly enhanced in diestrus II (33, 155, 179, 187) (similar to the premenstrual phase in women), whilst the cycle has inconsistent effects in tests employing conditioned threatening stimuli, which involve a learning component. The adverse symptoms experienced in the late luteal (premenstrual) phase may be considered an inappropriate over-reaction to everyday psychological stressors, which at other stages of the cycle do not trigger an adverse response. The clinical literature supports the hypothesis that premenstrual dysphoric disorder pathophysiology is rooted in an impaired GABA-A receptor response to dynamic fluctuations in allopregnanolone across the menstrual cycle, manifesting as affective symptoms and poor regulation of the physiologic stress response (245).
The importance of including females in all drug discovery protocols from the level of basic science using animal models to clinical trials in humans cannot be overstated. Greater standardization of experimental psychopharmacology protocols is required, in order to facilitate the search and characterization of novel anxiolytic agents for both sexes. Whilst females appear to respond in a qualitatively similar manner in most behavioral tests developed to model fear or anxiety in male rodents, it is becoming increasingly clear that male and female brains do not necessarily utilize the same neural mechanisms to achieve the same behavioral output. Moreover, behavioral responsiveness and drug action in females may be influenced by the changing chemical milieu of the brain during the estrous cycle. Drug development must be tailored to include female psychopharmacology with careful consideration of appropriate behavioral tests.
"year": 2021,
"sha1": "78a946b03e30a35e6a4e115e160a0a3169a6789a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2021.711065/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78a946b03e30a35e6a4e115e160a0a3169a6789a",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dynamic Contrast Enhanced MRI Can Monitor the Very Early Inflammatory Treatment Response upon Intra-Articular Steroid Injection in the Knee Joint: A Case Report with Review of the Literature
Dynamic contrast-enhanced MRI in inflammatory arthritis, especially in conjunction with computer-aided analysis using appropriate dedicated software, seems to be a highly sensitive tool for monitoring the early inflammatory treatment response in patients with rheumatoid arthritis. This paper gives a review of the current knowledge of this emerging technique. The potential of the technique is demonstrated and discussed in the context of a case report following the early effect of an intra-articular steroid injection in a patient with a rheumatoid arthritis flare in the knee.
Background
Imaging modalities aiming to identify perfusion characteristics in inflammatory joint disease are receiving increasing attention after a recent publication showed that measures of perfusion detected with ultrasound Doppler in the wrist joints of rheumatoid arthritis patients with low disease activity scores (DAS28) had the highest predictive value for future erosive outcome [1], compared to both clinical measures and conventional contrast-enhanced MRI.
Dynamic contrast-enhanced MRI (DCE-MRI) is such an imaging technique, based on sequential acquisition of rapid MRI sequences before and during the infusion of a contrast agent. It can be used to evaluate synovial activity in patients with rheumatoid arthritis (RA) and has been shown to correlate closely with synovial vascularity and inflammation [2-4]. An enhancement curve is obtained, where the initial rate of enhancement and the resulting plateau and potential washout depend on the inflammatory vasodilation, neoangiogenesis, and perfusion. The early enhancement rate determined by DCE-MRI has been shown to be more sensitive to change after intra-articular steroid injection [5] and has a closer relation to histological inflammatory activity than measures of synovial volumes [4, 6], making DCE-MRI a promising tool for assessing the early inflammatory response to treatment, potentially even before volume changes, and thus changes in the semiquantitative synovitis score, occur [7].
DCE-MRI has been tested on low-field [8] and high-field [4,6] scanners and is capable of discriminating patients with clinically active disease from those in remission.
Conventionally, DCE-MRI data is analysed using a region of interest (ROI)-based technique, where a small, few-millimetre ROI is placed in the most enhancing part of the synovium, as perceived by an observer [8]. It has been shown that the size and position of the ROI have a great impact on diagnostic accuracy, and ROI misplacement by only a few millimetres might give a 20%-30% difference in the results [9]. Thus, ROI-based methods generate highly subjective and potentially unreliable results. Finally, DCE-MRI data is influenced by micromovements of the imaged joint introducing artifactual enhancement, which results in large variation in the mean dynamic curves obtained by the ROI method [10].
These issues have been addressed by application of a new technique for analysis of dynamic data developed by Kubassova et al. [11, 12]. This approach is based on a fully automatic voxel- and model-based analysis technique with built-in movement correction, which improves the signal-to-noise ratio up to 3-fold by taking out inter-scan patient motion artefacts. Application of this technique for analysis of dynamic data can solve most of the above-mentioned technical issues, making DCE-MRI a more robust and even more promising tool for assessing the early response of inflammation to treatment.
Objective
To use DCE-MRI data to monitor early changes in parameters of knee joint inflammation in a patient with a flare of RA following ultrasound-guided intra-articular injection of glucocorticoid (methylprednisolone acetate 40 mg/ml). The case will serve as an example of the technique, and the changes seen will be discussed and explained in detail in order to give the reader a better understanding of the potential and pitfalls of using computer-aided analysis of DCE-MRI data. We hope that this paper can serve as an example of the potential of this methodology, which can be further investigated in future larger studies.
Clinical Information.
This 52-year-old woman had seropositive RA, diagnosed 13 years before. The patient had experienced side effects with several DMARDs, including methotrexate, and was treated with prednisolone, 5 mg daily. Supplementary injections of methylprednisolone were given occasionally in joints with acute flares; the last intra-articular injection had been performed in a wrist joint 10 months before the present treatment.
Clinical findings at baseline included a moderately swollen knee and slight-to-moderate joint pain, with scores on a 100 mm visual analogue scale of pain of 30 mm at rest and 50 mm on joint movement. Joint aspiration yielded 25 cc of cloudy synovial fluid, and after arthrocentesis, 1.5 ml of the glucocorticoid methylprednisolone 40 mg/ml was injected in the lateral recess of the knee, with almost complete resolution of symptoms by day 2 after injection and complete clinical remission at day 7. The effect lasted for 2 months. The patient had normal kidney function as measured by serum creatinine and estimated glomerular filtration rate (e-GFR).
Imaging.
After informed and written consent, the patient had conventional static MRI as well as dynamic MRI performed on days 0, 1, 2, and 7 using a 0.2 T musculoskeletal extremity scanner (Esaote E-scan). The patient was examined in the supine position with the knee positioned centrally in the receive-only cylindrical solenoid knee coil. The following pulse sequences were applied: gradient-echo scout, sagittal STIR (TR/TE/TI: 1310/24/85, fov/matrix: 200 × 170 mm/192 × 163, slice thickness 4 mm) and axial 3D T1 gradient echo (TR/TE: 38/16, fov/matrix: 180 × 180 × 100 mm/192 × 160 × 72, slice thickness 0.8 mm). After these images were acquired, an intravenous injection of 0.1 mmol/kg body weight Gadolinium-DTPA (Magnevist, Schering AG, Berlin, Germany) was administered over a period of 30 seconds. At the time of Gadolinium injection, acquisition of 30 consecutive 5 mm axial gradient echo dynamic MRI (DCE-MRI) images (TR/TE 60/6, FOV/imaging matrix 160 × 160 mm/256 × 128) was started in three prepositioned planes, with one image set obtained every 10 seconds, covering the superior, medial, lateral, and posterior joint recesses of the knee. Dynamic imaging time was 300 seconds. Finally, the static axial 3D T1 gradient echo sequences were repeated. The acquisition time of each sequence ranged from 4 to 8 minutes, with one signal acquired. Total imaging time was 30 minutes.
Image Analysis.
The conventional static imaging data was displayed using an AGFA PACS system (Figures 1 and 4(c)-4(d)). The STIR sequence (Figure 1) was used to evaluate the bone marrow and effusion. The pre- and postcontrast 3D T1-weighted gradient echo images were used to evaluate synovitis using a previously published semiquantitative score [7].
The dynamic enhancement pattern in the inflamed knee synovium was analyzed using the software Dynamika-RA (http://www.dynamika-ra.com/). Using this software, we reduced patient motion artefacts between the dynamic frames, which allowed reduction in artifactual enhancement, thus increasing the SNR by a factor of 2 (data not shown). The motion correction procedure took 3-4 minutes.
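The motion-correction algorithm built into Dynamika-RA is proprietary and not described here. Purely to illustrate the general principle, a minimal, translation-only alignment of the dynamic frames might be sketched as follows; the use of scikit-image phase cross-correlation, the function name, and registering everything against the first frame are all our illustrative assumptions, and a production method would additionally have to cope with the signal changes caused by the contrast agent itself:

```python
import numpy as np
from scipy.ndimage import shift as apply_shift
from skimage.registration import phase_cross_correlation

def align_dynamic_series(frames):
    """Translation-only motion correction of a DCE-MRI frame series.

    frames: NumPy array of shape (n_frames, rows, cols); the first,
    pre-contrast frame serves as the registration reference.
    """
    reference = frames[0].astype(float)
    aligned = np.empty(frames.shape, dtype=float)
    aligned[0] = reference
    for i in range(1, frames.shape[0]):
        moving = frames[i].astype(float)
        # Estimate the in-plane shift of this frame relative to frame 0,
        # with subpixel precision.
        estimated_shift, _, _ = phase_cross_correlation(
            reference, moving, upsample_factor=10)
        # Resample the frame back onto the reference grid.
        aligned[i] = apply_shift(moving, estimated_shift)
    return aligned
```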
Further, the data was analysed using the voxel-by-voxel-based approach incorporated into the software, and the enhancement characteristics of each voxel were computationally mapped to one of 4 enhancement models [13]. Parametric maps of the Gadolinium uptake pattern (Gd), maximum enhancement (ME), initial rate of enhancement (IRE), and time of onset of enhancement (Tonset) (Figures 2(a)-2(d)) are automatically calculated, and the corresponding colours, representing the vessel perfusion and the synovial microcirculation, are superimposed on the gray scale dynamic precontrast T1-weighted image. The colours in the Gadolinium map reflect the behaviour of the Gadolinium over time, where voxels with no Gadolinium uptake have no colour; voxels with a persistent pattern of enhancement are shown in blue, voxels with a plateau in green, and voxels with a washout pattern in red. In the ME and IRE maps, the most actively contrast-enhancing voxels are displayed in white to yellow colours, whereas tissues with less perfusion/inflammation are reddish (Figure 2) [13].
Understanding the Dynamic Enhancement Maps.
The vertical colour bars, or the Y-axis, in the 4 enhancement maps display the values of the chosen parameter (ME, IRE, Tonset, or Gd). The values are measured in each voxel and then grouped into 10 equally spaced bins, which are displayed on the colour bar. ME shows the increase over baseline in a particular voxel and is measured as the ratio between the baseline and the maximum enhancement of the enhancement model calculated by the software program. The IRE values show the increase in voxel intensity per second from the time of onset until maximum enhancement is reached. Tonset shows the time in seconds at which the enhancement curve begins, relative to the first baseline frame. The Gd map (washout in red, plateau in green, and persistent in blue) shows the pattern of enhancement in each particular voxel.
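To make these definitions concrete, the sketch below derives ME, IRE, Tonset, and a persistent/plateau/washout label from a single voxel's intensity curve. It is a simplified stand-in for the software's model-based classification: the onset threshold, the pattern cut-offs, and the use of raw intensities instead of the four fitted enhancement models are all our illustrative assumptions, not the Dynamika-RA implementation:

```python
import numpy as np

def voxel_enhancement_parameters(curve, frame_interval=10.0,
                                 n_baseline=3, onset_factor=1.1):
    """Heuristic ME/IRE/Tonset/pattern extraction for one voxel.

    curve: 1D array of signal intensity over the dynamic frames
    frame_interval: seconds between frames (10 s in this protocol)
    """
    curve = np.asarray(curve, dtype=float)
    baseline = curve[:n_baseline].mean()
    i_max = int(np.argmax(curve))
    me = curve[i_max] / baseline            # maximum enhancement, as a ratio

    # Onset: first frame that exceeds baseline by the chosen factor.
    above = np.nonzero(curve > onset_factor * baseline)[0]
    if above.size == 0:                     # non-enhancing voxel
        return {"ME": me, "IRE": 0.0, "Tonset": None, "pattern": "none"}
    i_onset = int(above[0])
    t_onset = i_onset * frame_interval      # seconds from the first frame

    # IRE: intensity increase per second from onset until the maximum.
    dt = max((i_max - i_onset) * frame_interval, frame_interval)
    ire = (curve[i_max] - curve[i_onset]) / dt

    # Pattern of Gadolinium behaviour over time (heuristic cut-offs).
    tail = curve[-n_baseline:].mean()
    if i_max >= curve.size - n_baseline:
        pattern = "persistent"              # still rising at the end
    elif tail < 0.9 * curve[i_max]:
        pattern = "washout"                 # clear decline after the peak
    else:
        pattern = "plateau"                 # levels off near the maximum
    return {"ME": me, "IRE": ire, "Tonset": t_onset, "pattern": pattern}
```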
The horizontal colour bar shows the number of voxels and their percentage of the total, in parentheses, for each statistic in the corresponding IMAP, for example, 1 (0.01%) or 248 (91%), and so forth. For more information, visit http://www.dynamika-ra.com/.
After the movement correction and the fully automatic analysis of the knee joint were performed, several regions of interest (ROIs) were drawn (Figures 3 and 4): (1) a quick, rough box ROI around the anterior part of the knee including the synovial membrane and excluding the major vessels; (2) a box ROI around the popliteal artery in the posterior part of the knee; (3) an oval ROI within the semimembranosus muscle (Figure 4(a)). An example of the maps of ME and IRE, and the corresponding static postcontrast 3D T1-weighted gradient echo images, over time after intra-articular steroid injection is displayed in Figure 4. These maps serve as a guide to visually evaluate the effect of the steroid injection from baseline through day 7.
Results
The conventional STIR images (Figure 1) showed an evident signal decrease in the patient's joint cavity between baseline and day 2 and a smaller signal reduction in the suprapatellar recess between day 2 and day 7. This signal reduction may be ascribed to a significant decrease in joint effusion, from a score of 2 to a score of 0 over time, with maximum effect at day 7 [7]. Bone marrow oedema was not present. The corresponding postcontrast T1-weighted gradient echo images showed that the synovial enhancement (arrows, Figures 4(c)-4(d)) was unchanged, corresponding to a synovitis score of 1 [7], even though the volume of the enhancing synovium was visually reduced on day 7 (image not shown).
Automatic Analysis.
ME and IRE statistics, extracted from the dynamic data of this case and generated for the whole joint, showed no significant changes in the days following the steroid injection (Table 1). However, all Gadolinium-related parameters, such as the total number of enhancing voxels (N-total) and the number of voxels with washout (N-washout) and plateau (N-plateau) patterns of enhancement, showed significant changes before and after treatment (Table 1). In contrast, an increase of IRE was noted at day 7.
ROI Analysis.
We further outlined a rough ROI positioned to include the synovial membrane and to exclude the larger vessels, especially behind the knee joint. There was no need to position the ROI precisely, as the measurements were only done on the enhancing voxels inside the ROI (Figures 3 and 4).
The IRE of the roughly outlined synovial ROI decreased from baseline values over the first two days by a factor of 4 and remained low at the 1-week follow-up. The mean ME showed no significant reduction (Table 1).
The dynamic curves and corresponding enhancement statistics from the vessel ROI including the popliteal artery remained relatively unchanged over time but showed a day-to-day variation (Table 1). In order to normalize the ROI data, we multiplied the sum of N-plateau and N-washout with the mean ME and mean IRE in all ROIs. This gave a much clearer treatment effect in the data, revealing a significant reduction from baseline through day 7. We saw an effect even on day 1, with the most pronounced change between days 1 and 2 (Table 1).
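A minimal sketch of this normalisation step, assuming the per-ROI statistics have already been extracted; the function name, the variable names, and the numbers in the example are hypothetical illustrations, not the values from Table 1:

```python
def normalized_activation_score(n_plateau, n_washout, mean_me, mean_ire):
    """Normalized inflammation score for one ROI: the number of voxels
    with plateau or washout enhancement (taken to represent perfused
    synovium and vessels) weighted by the ROI's mean ME and mean IRE."""
    return (n_plateau + n_washout) * mean_me * mean_ire

# Hypothetical per-visit ROI statistics, for illustration only:
visits = {
    "baseline": dict(n_plateau=1200, n_washout=300, mean_me=1.8, mean_ire=0.045),
    "day 1":    dict(n_plateau=900,  n_washout=180, mean_me=1.7, mean_ire=0.030),
    "day 2":    dict(n_plateau=400,  n_washout=60,  mean_me=1.6, mean_ire=0.012),
    "day 7":    dict(n_plateau=350,  n_washout=40,  mean_me=1.6, mean_ire=0.011),
}
for day, stats in visits.items():
    print(day, round(normalized_activation_score(**stats), 1))
```

Because the voxel count enters the product directly, a treatment response that shrinks the enhancing volume shows up in the score even when the per-voxel means (ME, IRE) barely move, which is exactly the situation described above.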
Discussion
When static postcontrast T1-weighted MRI is used to monitor the early inflammatory treatment response in patients with RA, a change of up to 30% in enhancing volume is needed to imply a one-step change in the inflammation score [14]; accordingly, this method is relatively insensitive for monitoring the early treatment response to anti-inflammatory treatment.
In contrast, DCE-MRI seems to be highly sensitive to the early treatment response, but even though the methodology of DCE-MRI has been known for several years, previous studies have reported problems with reproducibility of results due to large variations in the ROI analysis [9]. The current case illustrates that DCE-MRI analysed using appropriate computer software seems capable of detecting and quantifying the very early treatment response, and that the observed changes upon treatment in this case occurred in parallel with changes in clinical symptoms. We have used a single case to illustrate the potential of the technique, but before final conclusions concerning the broader utility of this method can be made, results from larger patient cohorts are warranted, and there are some pitfalls and technical challenges we need to understand as well.
Fully automated data analysis of the whole joint revealed that the mean IRE and ME did not change significantly over time, even though the number of enhancing voxels showed a dramatic decrease between days 1 and 2. The reason seems to be the confounding effect of the large vessels behind the knee, where the values of ME and IRE are highest; thus, the enhancement changes in the synovial membrane are "shadowed" by the activity in the neighbouring vessels. On the other hand, drawing a rough ROI surrounding the synovial membrane, and thus removing the confounding influence of the major blood vessels, revealed a significant treatment response in the slope of the ROI curve, which decreased by a factor of 4 between baseline and day 2 and remained in the same lower range at the one-week follow-up. Based on these observations, we recommend excluding the larger vessels from DCE-MRI analysis of the knee joint, which can be done by using a rough ROI and an appropriate software tool.
The interpretation of changes in DCE-MRI following intra-articular steroid administration is also potentially confounded by the heterogeneity of treatment response across the whole synovial tissue mass. Thus, as expected, the ME did not change following treatment, since the remaining voxels demonstrating enhancement in the follow-up examinations reached approximately the same level as observed in pretreatment images. However, it took a longer time to reach the plateau because of a lower steepness of the slope (Figure 3). In order to better interpret the changes in DCE-MRI images as reflecting a true biological effect of the treatment intervention, despite the regional variation in synovial responses that accounts for the lack of change in ME in the ROI as a whole, we normalized the data by multiplying the means of ME and IRE by the number of voxels with plateau and washout patterns of enhancement. This permitted a much clearer statistical differentiation between the data acquired before and after the treatment. In our experience, all voxels that reached the plateau or washout phase seem to represent areas with synovial and vessel perfusion, and we have therefore chosen to use the sum of N-plateau and N-washout in this normalisation. In contrast, tissues with a persistent pattern of enhancement (N-persistent) are often located in the skin area or are due to very fine movement artefacts that were not removed by the motion reduction algorithms. Whether this approach can be used in general needs to be clarified in larger studies.
As the software uses a model-based enhancement classification, there is no need to apply a threshold, nor do we recommend normalising the data to the enhancement characteristics of the vessels or the muscles, because there seems to be a relatively large day-to-day variation in the ROI statistics.
We examined the current patient 4 times within a week to measure the effect of the steroid injection. This approach cannot be recommended for routine clinical use for many reasons, including the use of i.v. Gadolinium and expensive, time-consuming MRI examinations, as well as the limited availability of MRI scanners for RA patients. The case should be seen as an example of the potential of the technique, but based on our results, which have to be confirmed in larger studies, we speculate that the technique could be used to obtain a more objective impression of the early treatment effect of the more expensive biologic treatments, that is, within 2-4 weeks of treatment. This could lead to better patient care by reducing the time spent on an ineffective treatment, and in the long run money could be saved on the health economy budget.
This study has several limitations, as our findings are based on a case report, and we have used the knee joint to illustrate the potential of the technique, where ROI-based exclusion of the larger vessels is fairly easy. We cannot assume that our findings can be extrapolated to smaller and more complex joints like the wrist. The therapeutic intervention employed in this case was an intra-articular steroid injection known to have potent anti-inflammatory activity, and we cannot assume that an observable treatment effect would be as pronounced and as rapid when using conventional DMARDs or even biologic therapies.
Conclusion
In conclusion, DCE-MRI in conjunction with analysis using appropriate software seems to be a highly sensitive tool for monitoring the early inflammatory treatment response in patients with RA, as demonstrated by the assessment of knee joint inflammation following an intra-articular steroid injection. The decrease in IRE and ME at the day-2 follow-up in this case example, especially seen in the normalized data, corresponded to improvement in the patient's clinical symptoms. These findings have to be further tested in larger clinical trials on several joints to see whether the observed benefit of dynamic MRI in the current case may be used in general as a sensitive biomarker to track the early treatment response in patients starting potent anti-inflammatory treatments such as local/systemic steroids and/or biologics. | 2016-05-04T20:20:58.661Z | 2011-03-17T00:00:00.000 | {
"year": 2011,
"sha1": "4ffd6c34af791afc190ce396892820d8a645630c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2011/578252.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ffd6c34af791afc190ce396892820d8a645630c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34623339 | pes2o/s2orc | v3-fos-license | Performance Evaluation of Trickling Filter-Based Wastewater Treatment System Utilizing Cotton Sticks as Filter Media
The need for wastewater (WW) treatment is increasing along with the production of WW and its disposal without treatment. With a smaller footprint, ease of operation, and relatively low cost, trickling filter (TF) wastewater treatment systems have been considered more adoptable for domestic and industrial WW treatment in underdeveloped and/or developing countries, particularly in Asia and Africa. A relatively low-cost and operationally effective TF wastewater treatment system was developed using farm-waste cotton sticks as biofilm support media. During the operation of the TF system, flow rates varied from 1.7 to 4.6 m³/hr. The attained removal efficiency was 69-78% for biological oxygen demand (BOD) and 65-80% for chemical oxygen demand (COD). The solids removal in the TF system was 38-56% for total suspended solids (TSS) and 20-36% for total dissolved solids (TDS). Removal of other aggregates, turbidity and color, was 32-54% and 25-42%, respectively. Four to five months of trouble-free operation of the developed TF system indicated the robustness and reliability of the system. Cotton sticks appeared to be a degradation-resistant alternative filter media for the TF system. Moreover, the system is useful for reducing the potential impacts of WW re-use at the farm level. Treated effluents from the TF system can be re-used as an irrigation water supplement in under-developed and/or developing countries.
Introduction
The resources of fresh water and population are distributed irregularly on planet Earth. Water supplies fall short of water demand in about 40% of populated areas around the world. The regions under water scarcity face many adverse impacts on their development and living standards due to limited or no access to freshwater. About 60% of the world's population may potentially face water scarcity in the next 10 to 20 years. The countries of Asia and Africa are rapidly losing their surface water resources [1-7].
The global projected amount of wastewater (WW) production was 450 × 10⁹ m³ per year in 2010 [8]. Total production of wastewater in a country like Pakistan is 4,369 × 10⁶ m³ per year, which includes 3,060 × 10⁶ m³ (70%) per year from municipal and 1,309 × 10⁶ m³ (30%) per year from industrial use [9,10]. WW generation through industrial sub-sectors accounts for more than 80% of total industrial WW. The total estimated amount of WW applied directly to agriculture is 876 × 10⁶ m³ (27%) per year [11]. An estimated 32,500 ha is irrigated directly with WW [12]. About 2,000 million gallons of WW is discharged into natural drains every day [9].
Due to urbanization, environmental problems in urban areas such as water supply, WW generation, collection, treatment, and disposal have increased. Untreated WW usually contains organic contaminants such as pesticides and oil, and inorganic pollutants like metals, ions, nitrates, sulphates, phosphate, arsenic, cadmium, mercury, lead, etc. [13]. Microorganisms such as fungi, bacteria, and viruses are often found in sewage water in appreciable amounts, which may pose a threat to community health [14]. A large quantity of sewage and other effluents is released from urban areas, and the use of this WW for agricultural and other purposes depends on its contamination level. Only a small portion of the generated WW receives even limited treatment before entering rivers or surface water bodies [15]. Due to this addition of WW, surface water quality is decreasing and pollution is increasing day by day [9,16,17]. Untreated WW, which contains sludge and other commercial effluents, may flow toward the rivers and finally find its way into irrigation canals. Moreover, it has the potential to cause bacteriological diseases such as polio, dysentery, hepatitis, typhoid, paratyphoid, and other bacterial infections [9,18-20]. Hence disposal of WW without treatment is a serious environmental concern [21], and to avoid these harms it is necessary to treat WW before its disposal [22,23].
WW treatment cost can be reduced by adopting operations that enable proper reuse options, e.g., for agriculture or irrigation, rangeland, and forest. Biological treatment with trickling filters is a cost-effective method for WW treatment. In this process, WW passes through a suitable medium and comes into contact with a microbial layer grown on the media surface, where treatment occurs through degradation of organic matter by the microorganisms; the microbes oxidize pollutants and reduce the organic and inorganic contaminants. The major advantages of biofiltration are ease of operation and maintenance, flexibility against load variation, low construction cost, simple design, energy savings, low expense, and the small area required for installation [24].
WW treatment is a costly option for underdeveloped and developing countries. Less than 50% of the globally generated WW is treated [8,9]. The conditions are more severe in developing countries like Pakistan, India, and Bangladesh, where only 8% of domestic and industrial WW receives treatment, and only to the primary level [9,20,25,26]. Therefore, the present study addresses a TF-based WW treatment system that utilizes agricultural waste (i.e., cotton sticks) as biofilm support medium. As cotton sticks are abundantly available in many developing countries, this will result in low-cost WW treatment. Moreover, the treated water will be available for agricultural applications.
Material and Methods
In the present study, a trickling filter was selected as a secondary WW treatment system for sewage treatment. WW treatment by trickling filter is a purely biological process and can be used for municipal and commercial WW treatment. It is simple in design and requires low cost and a small area for installation. Moreover, it has low energy requirements and needs little repair and maintenance [25,27,28].
Experimental Setup
The cylindrical reactor body was made of 22-gauge stainless steel. Its diameter was 30 inches (76.2 cm) and its length from top to end was 60 inches (152.4 cm). Agricultural bio-waste material (cotton sticks) was used as the filter media for microbial growth in the developed TF system. The filter media (cotton sticks) was placed vertically in the TF system to a height of 51 inches (129.5 cm), with stick diameters varying from 0.5 to 1.0 inch. A distributor was installed at the top of the reactor to spread WW uniformly over the filter media. Flow rates were changed with the help of control valves. A 6-inch-deep drainage layer was constructed at the bottom of the reactor for ventilation and to carry WW out of the reactor tank for final sedimentation. Natural ventilation was also caused by convection currents due to the temperature difference between the WW and atmospheric air [25,29]. A settling tank was also provided for collecting and settling WW. A schematic diagram of the developed TF system using cotton sticks as filter media is shown in Fig. 1.
All experiments were conducted on real WW collected at a disposal station in Bahauddin Zakariya University, Multan. The developed TF system was operated at four different hydraulic loading rates, i.e., 1.7, 2.6, 3.8, and 4.6 m³/hr. The WW collection tank was kept at a fixed gravity head in order to avoid fluctuations and to provide a constant and continuous flow of WW to the system. The performance of the developed TF was checked for the above hydraulic flow rates by characterizing the influent and effluent samples.
Experimental Procedure
During the study, all influent and effluent samples were taken at regular intervals. BOD5 (once a week) was measured using the five-day BOD test, i.e., standard method 5210B for the examination of water and wastewater [30]. COD (five days a week) was measured by standard methods for the examination of water and wastewater [30]. TSS, turbidity, and color (five days a week) were measured using a Spectroquant Multi, in mg/l. TDS (five days a week) was measured using an Eco Tester TDS Low meter, in mg/l [25].
Development of Biofilm
Soon after the cotton sticks were placed in the reactor tank as filter media, WW was trickled over the filter media for the development of biofilm. The trickling filter was operated for 15 to 30 days as a startup period for biofilm development [25]. To achieve good quality of treated WW, healthy and active growth of the biofilm layer is necessary. The present research investigated the performance of the developed TF system at different hydraulic flow rates using cotton sticks as filter media. Therefore, for every run of the TF system at a specific hydraulic flow rate, the WW distribution over the filter media was kept constant for the development of biofilm [25,31,32]. The potential of cotton sticks in terms of biofilm development is shown in Fig. 2.
Results and Discussion
The collected WW samples were analyzed for various WW treatment parameters in order to establish the baseline data. Characterization of the raw WW is given in Table 1. The average values obtained from the characterization indicate that it is medium-strength sewage [27]. For biological WW treatment, BOD5/COD should be > 0.60. If it ranges from 0.3 to 0.6, additional seeding will be required for proper biological treatment, because the treatment process will be slow and microbes will take more time to degrade contaminants. For BOD5/COD < 0.30, biological degradation of WW contaminants will not proceed, owing to the refractory properties and toxicity of the generated WW; moreover, such WW inhibits metabolic activity and microbial growth [25,32]. In the present study, the influent ratio of BOD5 to COD was more than 0.80, which indicates that the WW does not require any pre-treatment or acclimated biomass [33].
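As an illustration of these treatability thresholds, here is a minimal Python sketch; the cutoff values are taken directly from the text above, while the function name and messages are our own:

```python
# Illustrative classifier for the BOD5/COD treatability thresholds
# cited above; cutoffs follow the text, everything else is hypothetical.

def biodegradability(bod5, cod):
    """Classify WW treatability from the BOD5/COD ratio (both in mg/l)."""
    ratio = bod5 / cod
    if ratio > 0.6:
        return "readily biodegradable - suitable for biological treatment"
    if ratio >= 0.3:
        return "slowly biodegradable - additional seeding required"
    return "refractory/toxic - biological treatment will not proceed"

print(biodegradability(bod5=220.0, cod=260.0))  # ratio ~0.85, as in this study
```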
BOD and COD
The influent and effluent BOD concentrations ranged from 156-278 mg/l and 38-80 mg/l, whereas COD ranged from 139-342 mg/l and 36-118 mg/l, respectively. It is important to mention that the WW quality standards for agricultural reuse are 80 mg/l for BOD and 150 mg/l for COD [34]. During the study we observed that BOD and COD removal efficiencies varied due to ambient air temperature fluctuations during operational days [25]. The TF system showed maximum treatment efficiencies of 72-77% for BOD and 73-79% for COD at a flow rate of 1.7 m³/hr. The BOD and COD removal efficiencies remained at 62-70% when the flow rate was 2.6 to 3.8 m³/hr, as shown in Figs 3-4.
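The efficiencies quoted throughout this section follow the standard definition, computed per parameter from influent and effluent concentrations; a minimal sketch, with example concentrations chosen from the ranges above:

```python
# Minimal sketch of the removal-efficiency calculation used throughout:
# efficiency (%) = (influent - effluent) / influent * 100.

def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant across the trickling filter."""
    return (influent - effluent) / influent * 100.0

bod = removal_efficiency(influent=278.0, effluent=65.0)  # mg/l
cod = removal_efficiency(influent=342.0, effluent=80.0)  # mg/l
print(f"BOD removal: {bod:.1f}%, COD removal: {cod:.1f}%")
```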
Trickling filter performance was investigated at four different hydraulic loading rates, namely 1.7, 2.6, 3.8, and 4.6 m³/hr, at which the system achieved BOD removal efficiencies of 72-77%, 71-74%, 70-71%, and 69-71%, respectively, as shown in Fig. 3. Similarly, for COD the TF system efficiencies were 73-79%, 68-76%, 65-74%, and 66-72% for flow rates of 1.7, 2.6, 3.8, and 4.6 m³/hr, respectively, as shown in Fig. 4. During the study, overall BOD and COD removal efficiencies ranged from 69 to 77% and 65 to 79%, respectively, with maximum reductions of 77% and 79%. The average BOD removal efficiencies were 75, 73, 71, and 70%, and the average COD removal efficiencies were 76, 72, 70, and 68%, at flow rates of 1.7, 2.6, 3.8, and 4.6 m³/hr, respectively. The results showed a slight but not significant decrease in TF system efficiency as the flow rate increased. A study conducted on a trickling filter using pebble gravel as filter media at four different flow rates of 500, 600, 700, and 800 L/d confirmed that there was no large difference in COD removal efficiency [29]. A hybrid system designed for domestic WW treatment showed that removal efficiency decreased as flow rate increased [35]. The maximum BOD and COD removal efficiencies were obtained at the hydraulic flow rate of 1.7 m³/hr because of the high biological degradation of organic contaminants afforded by the longer retention time [36]. The slight decrease in BOD and COD efficiencies with increasing flow rates (1.7 to 4.6 m³/hr) was due to the higher hydraulic loading rates, which may reduce the residence time of WW in the TF system and hence the contact between liquid and biofilm (Figs 3-4) [36]. It has also been shown that as flow rate decreases, removal efficiency increases [33,36-39]. Another study showed that with increasing hydraulic flow rate the BOD and COD removal efficiencies in the trickling filter remained stable [37].
Higher BOD and COD removal efficiencies were due to higher oxygen availability maintaining an aerobic zone in the outer portion of the slime layer, which degrades the organic substrates. Low BOD and COD removal efficiencies were due to low oxygen availability, which resulted in increased slime-layer thickness and the maintenance of an anaerobic zone in the outer portion of this slime layer. The anaerobic zone prevented the full degradation of organic substrates before discharge from the trickling filter system [25]. The observed variations in BOD and COD removal efficiencies can also be explained by the accumulation of biomass in the effluent flow, which decreased TF system efficiency [25,31,33,40-43]. The high BOD and COD removal efficiencies of a trickling filter system can be explained by biofilter activity, which is known to be efficient for nitrification [25,44,45]. In addition, cotton sticks proved to be a sustainable filter media, with no notable degradation during the study period. No significant seasonal changes were observed in TF system efficiency, owing to the long-lasting life of cotton sticks.
During the study period, the BOD loading rate varied from 6 to 21 kg BOD/m³/d, and the COD loading rate varied from 5 to 26 kg COD/m³/d. It was observed that as the BOD and COD loading rates increased, the BOD and COD removal efficiencies remained within the limits of 70-80%, as shown in Figs 3 and 4. Most studies have found that system efficiency decreases with increasing loading rates. This occurs because varying influent loadings can increase the average effluent concentration, owing to the decreased retention time for WW to contact the slime layer and oxidize the organic matter, as well as the non-uniform development of biofilm in the internal structure [33,42,46-49].
TSS and TDS
Solids residue determination is a very important parameter in WW treatment, as it indicates the physical state of the principal constituents. The TSS and TDS removal performance of the TF system using cotton sticks as biogenic support media was investigated at flow rates of 2.6, 3.8, 1.7, and 4.6 m³/hr. During the study we observed that TSS removal in WW is related to COD reduction: as TSS was reduced, the COD concentration in the effluent also decreased. In untreated WW (influent), the TSS value ranged from 69 to 107 mg/l, and after treatment (effluent) the concentration ranged from 39 to 60 mg/l. Average TSS removal efficiencies of 47, 46, 48, and 44% were achieved at flow rates of 2.6, 3.8, 1.7, and 4.6 m³/hr, respectively, as shown in Fig. 5. Overall, the trickling filter system reduced TSS to as low as 39 mg/l, with a removal efficiency of 38-56%. Variation in treatment efficiency was recorded due to the accumulation of sloughed-off material or the degradation of solids from the filter media during the operation of the trickling filter system [25].
TDS is also an important parameter in WW treatment, indicating the physical state of the principal constituents. During the study, influent TDS values ranged from 560 to 720 mg/l and effluent values from 380 to 530 mg/l. The trickling filter system reduced the TDS value to as low as 380 mg/l, showing average removal efficiencies of 28, 29, 32, and 25% at flow rates of 2.6, 3.8, 1.7, and 4.6 m³/hr, respectively (Fig. 6). The decrease in TSS and TDS efficiencies with increasing flow rate may be due to the higher hydraulic loading rate, which may reduce the residence time of WW in the trickling filter and hence the contact between liquid and biofilm [36]. During the operation of the trickling filter system, the removal efficiency of TDS was in the range of 20 to 35%. The decrease in TDS was attributed to the conversion of NO₃ to N₂ (diatomic nitrogen); dissolved solids are also responsible for electrical conductivity (EC) in WW. Many other studies have concluded that dissolved solids, suspended solids, and COD are related to electrical conductivity levels in WW: higher solids concentrations correspond to higher EC [25,48-52].
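To make the TDS-EC link concrete, a common rule of thumb relates the two linearly, TDS [mg/l] ≈ k × EC [µS/cm] with k typically around 0.55-0.70; the conversion factor below is a general assumption, not a value from this study:

```python
# Illustrative TDS-EC conversion using the common rule-of-thumb factor;
# k is an assumption (typical range 0.55-0.70), not from this study.

def tds_from_ec(ec_us_cm, k=0.65):
    """Estimate TDS (mg/l) from electrical conductivity (uS/cm)."""
    return k * ec_us_cm

# The influent/effluent TDS extremes reported above would correspond
# roughly to these conductivities:
for tds in (720.0, 380.0):
    print(f"TDS {tds:.0f} mg/l ~ EC {tds / 0.65:.0f} uS/cm")
```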
Turbidity and Color
Turbidity is another important parameter in WW treatment, as it indicates the potential growth of pathogens responsible for waterborne diseases [53,54]. Turbidity in WW may be due to the presence of particulate and dissolved organic matter. The TF system was operated at four different flow rates and tested for turbidity removal. Influent turbidity values ranged from 50 to 99 FAU, and after treatment effluent values ranged from 31 to 56 FAU, as shown in Fig. 7. Average removal efficiencies of 44, 44, 47, and 42% were achieved at flow rates of 2.6, 3.8, 1.7, and 4.6 m³/hr, respectively. During the operation of the trickling filter system, overall removal efficiency ranged from 32 to 57%. This decrease in turbidity was due to the degradation of organic compounds present in the WW by microorganisms attached to the filter media [25,50,53,55,56]. During the study we also found that turbidity removal is related to COD reduction: as turbidity in WW decreases, the COD value is also reduced, and vice versa [25,51].
Color removal is another important parameter in WW treatment because color represents visible contamination. It is necessary to remove color from WW not only because of its toxicity, but also because of aesthetic impacts on receiving bodies. The trickling filter system was operated at four different flow rates for color removal. The concentration of color in the influent ranged from 450 to 686 Pt/Co, and after treatment it ranged from 270 to 481 Pt/Co. Average color removal efficiencies were 33, 32, 34, and 30% at flow rates of 2.6, 3.8, 1.7, and 4.6 m³/hr, respectively, as shown in Fig. 8. During the operation of the trickling filter system, maximum color removal efficiency reached 42%. Color removal in WW was due to the reduction in dissolved solids, suspended solids, and turbidity. Many researchers have explained that color in WW occurs due to the presence of dissolved minerals, organics, and chemicals, and that reduction in color occurs through the adsorption capability of the filter media [25,33,48].
Conclusions
In the present study, the TF system using cotton sticks as support material was successfully tested for municipal WW treatment in under-developed and/or developing countries, particularly in Asia and Africa. The TF system using cotton sticks as filter media proved that municipal WW can be handled in an environmentally friendly and cost-effective manner. The results confirmed that the TF system successfully removed contaminants (including BOD, COD, TSS, TDS, turbidity, and color) to levels that lie within the PEPA NEQS limits for re-use in agriculture. The TF system was tested at different flow rates of 1.7, 2.6, 3.8, and 4.6 m³/hr. At the low flow rate of 1.7 m³/hr, the TF system using cotton sticks as support media showed about 78% BOD and 80% COD removal efficiency. When the flow rate was increased about 2.7 times, the system efficiency remained around 70%. Additionally, the TF system removed 38 to 56% of TSS, 20 to 36% of TDS, 31 to 56% of turbidity, and up to 42% of color. | 2017-10-19T16:54:21.656Z | 2017-09-28T00:00:00.000 | {
"year": 2017,
"sha1": "020b454985268f6349b20b5cfcba6edf2c44e611",
"oa_license": null,
"oa_url": "http://www.pjoes.com/pdf-69443-24120?filename=Performance%20Evaluation%20of.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "020b454985268f6349b20b5cfcba6edf2c44e611",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
116463564 | pes2o/s2orc | v3-fos-license | Fiber-wireless for smart grid: A survey
Smart grid allows two-way communication between power utility companies and their customers while having the ability to sense along the transmission lines. However, the downside is that when smart devices transmit data simultaneously, network congestion results. Fiber-wireless (FiWi) networks are among the best congestion solutions for the smart grid to date. In this paper, the current literature on FiWi for smart grid is surveyed, and a testbed for evaluating FiWi protocols and algorithms in the smart grid is proposed. The number of packets received and the delay versus packets transmitted obtained via the testbed are compared with the results obtained via simulation; they are in line with each other, validating the accuracy of the testbed.
Introduction
An interconnected network that is used to deliver electricity from suppliers to consumers is called an "electrical grid". Electrical power is produced at generating stations, carried to demand centers via high-voltage transmission lines, and delivered to individual customers via distribution lines.
Despite being an engineering marvel, to date the electrical grid is being stretched to its capacity. This is due to population growth and the addition of more modern electrical appliances in every household, such as high-definition televisions, laptops, and wireless telephones. These modern appliances are more sensitive to variations in electric voltage, causing the entire electric grid to become overused and fragile.
Therefore, an improved electricity supply chain called the "Smart Grid" has been introduced to maximize the throughput of the system while reducing energy consumption. Figure 1 shows the differences between the conventional electrical grid and the Smart Grid, where additional features such as monitoring, analysis, control, and communication capabilities are introduced. Devices located along the power lines and on premises are able to interact with each other, allowing two-way communication between utility and customers. Hence, the Smart Grid is able to respond digitally to ever-changing electricity demand.
The term used to denote automated two-way communication between a smart meter and a utility data center is Advanced Metering Infrastructure (AMI). AMI uses bidirectional communication to provide energy management data, such as consumption data and outage reports, as well as control information, such as alerts and equipment settings. AMI comprises four tiers: home network, smart meter, concentration point, and utility data centre. In the home network tier, a Home Area Network connects smart appliances to a smart meter, which collects data to measure real-time energy consumption. The data collected by the smart meter then traverse to a concentration point, such as a substation or communication tower, as part of the smart grid. Afterwards, data flow from the concentration point to the metering data management system (MDMS) located in the utility data centre via a private network. The MDMS then processes and manages the data on energy consumption and fault detection.
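To make the four-tier data path concrete, here is a minimal Python sketch of a reading flowing from the home network to the MDMS; all class and function names are hypothetical and not from any AMI standard:

```python
# Illustrative sketch of the four AMI tiers described above; names and
# fields are hypothetical, not from any specific AMI standard.
from dataclasses import dataclass

@dataclass
class MeterReading:
    meter_id: str
    kwh: float          # consumption since the last report
    outage_flag: bool   # part of the control/alert information

def home_network(appliance_loads_kwh):
    """Tier 1: the HAN aggregates smart-appliance consumption."""
    return sum(appliance_loads_kwh)

def smart_meter(meter_id, kwh):
    """Tier 2: the meter packages a reading for upstream transport."""
    return MeterReading(meter_id, kwh, outage_flag=False)

def concentration_point(readings):
    """Tier 3: a substation/tower batches readings toward the utility."""
    return readings  # in practice: buffering, prioritisation, backhaul

def mdms(readings):
    """Tier 4: the MDMS processes consumption and fault-detection data."""
    for r in readings:
        print(f"{r.meter_id}: {r.kwh:.2f} kWh, outage={r.outage_flag}")

reading = smart_meter("meter-001", home_network([0.4, 1.2, 0.1]))
mdms(concentration_point([reading]))
```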
However, the downside of the Smart Grid is that when smart devices transmit data simultaneously, network congestion results [1]. Challenges arise in ensuring the reliability and timeliness of the data transmitted over these networks. Therefore, advanced techniques such as cognitive radio [2] and fiber wireless (FiWi) have been developed to fully utilize the capability of smart grid wireless networks.
Cognitive radio networks allow unlicensed devices to transmit in unused "spectrum holes" in licensed bands without causing harmful interference to authorized users. A cognitive radio configures itself for different combinations of protocol, operating frequency, and waveform. However, cognitive radio needs to deal with two primary issues, hidden primary users [3] and spread-spectrum primary users [4], both of which lead a cognitive radio to incorrectly decide that a spectrum block is empty. Hence, there is a higher probability of causing signals to interfere with licensed primary users.
This does not happen with FiWi, as the optical side is able to provide reliable transmission for smart meter and intelligent sensor data, while the wireless side allows flexible access to remote locations and broad coverage. Therefore, this paper discusses the integration of FiWi and smart grid in detail. The remainder of this paper is structured as follows. Section 2 introduces the fundamentals of FiWi networks. The state of the art of FiWi networks in smart grid is then reviewed, and we highlight selected FiWi testbeds in the literature. We introduce our proposed testbed in the next section, and the final section concludes the paper.
Fundamentals of FiWi Networks
FiWi is an integration of fiber and wireless networks, also known as the endgame for the broadband access network. Fiber and wireless are combined to achieve both high bandwidth and mobility in the network.
The typical architecture of FiWi is depicted in Figure 2, where the wired side consists of a basic configuration of a Passive Optical Network (PON), the dominant broadband access network to date. FiWi consists of an Optical Line Terminal (OLT) at the central office that is connected to multiple Optical Network Units (ONUs) at the customers' side via optical fiber and a passive optical splitter. The ONUs are further connected to end users wirelessly.
PON for FiWi at the wired side can generally be divided into two types: Wavelength Division Multiplexing (WDM) and Time Division Multiplexing (TDM) PON. WDM PON allows each ONU to operate at a different wavelength to avoid collisions. In order to receive the data transmitted on multiple channels, a tunable receiver or a receiver array is required at the OLT. It also requires each ONU to use a fixed transmitter operating at a different wavelength, which would result in an inventory problem. Although the inventory problem can be solved by using tunable transmitters, these devices are costly, making the solution cost-ineffective.
Fig. 2. FiWi architecture
On the other hand, with TDM, the OLT allocates a time slot or a transmission window for data transmission to each ONU. Upon the arrival of its time slot, the ONU sends out its buffered packets at the full transmission rate of the upstream channel. If there are no frames in the buffer to fill the entire time slot, idles are transmitted. TDM is considered a cost-effective approach, but its arbitration mechanism is more complex than WDM's.
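A minimal sketch of this time-slot behaviour, with a hypothetical slot size; the frame-draining and idle-padding logic follows the description above:

```python
# Minimal sketch of TDM upstream scheduling as described above: an ONU
# drains its buffer within its granted window and pads the remainder
# of the slot with idles. The slot size is illustrative.
from collections import deque

SLOT_BYTES = 1500 * 4  # hypothetical window: room for four max-size frames

def serve_time_slot(onu_buffer):
    """Return (frames sent this slot, idle bytes transmitted)."""
    sent, used = [], 0
    while onu_buffer and used + len(onu_buffer[0]) <= SLOT_BYTES:
        frame = onu_buffer.popleft()
        sent.append(frame)
        used += len(frame)
    return sent, SLOT_BYTES - used  # leftover slot time carries idles

onu = deque([b"x" * 1500, b"y" * 900])
frames, idle_bytes = serve_time_slot(onu)
print(len(frames), "frames sent,", idle_bytes, "idle bytes")
```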
The two most popular wireless technologies used in FiWi are WiFi and WiMax. WiFi, which uses IEEE Standard 802.11, offers low bandwidth: 54 Mbps for IEEE 802.11a, 11 Mbps for 802.11b, and 54 Mbps for 802.11g. The range is also limited, typically up to 100 m, which is why it is mainly used for wireless local area networks. In WiFi, a central authority known as an access point is required to manage the network. It is gaining popularity due to its flexibility for multihop operation.
WiMax, on the other hand, uses IEEE Standard 802.16. WiMax provides high bandwidth, up to 75 Mbps at a range of 3 to 5 km. However, as the distance increases, the bandwidth drops to 20-30 Mbps, because WiMax does not work proficiently for non-line-of-sight communications. WiMax is typically used for metropolitan area networks, with a base station required to manage the network. The downside is that WiMax is more suitable for single-hop operation.
Data is transmitted in FiWi networks via one of two techniques: Radio over Fiber (RoF) or Radio and Fiber (R&F). RoF is a technique in which radio signals are transmitted over optical fiber to provide communication service. It is an analog communication scheme where the signals are simply converted from electrical to optical and vice versa. One advantage of RoF is that only minimal modifications are required at the base stations or access points, since Radio Frequency (RF) signals are transmitted to the remote antenna as-is. R&F, by contrast, uses different media access control (MAC) protocols on the fiber and wireless links. Although it requires major modifications at base stations or access points, since it uses two different protocols for two different links, it solves the problems caused by the insertion of the optical distribution system into wireless networks.
For FiWi in the smart grid, different combinations of techniques are used at the fiber link, at the wireless link, and for data transmission. These combinations are discussed in the following section.
FiWi for Smart Grid
To date, there is still very limited literature on utilizing FiWi for the smart grid. Among the earliest papers on FiWi for smart grid found in the literature is Uber-FiWi, published in 2011 by Maier et al. [5]. Uber-FiWi combines a big fiber ring network at the wired side to interconnect the distribution management system with either an EPON network (for urban areas) or a WiMax base station (for suburban or rural areas). The total power consumption and total cost of various scheduling algorithms were studied using this hybrid network.
In 2013, Zaker et al. [6] proposed a Fiber-Wireless Sensor Network (Fi-WSN) gateway design for the smart grid. In the proposed design, a TDM Ethernet PON (EPON) serves as the back-end of the communication network, whereas the WSN forms the front-end. Because data in the smart grid need to be treated differently according to their urgency, an algorithm was proposed in the paper to differentiate between high-priority and low-priority packets.
In 2014, Ghassemi et al. [7] proposed the use of RoF networks for the smart grid. RF signals are distributed from a control unit called the headend (or OLT) to remote antenna units (RAUs) (or ONUs) so that the complex signal processing functions (for modulation, synchronization, multiplexing, coding, etc.) are centralized. This simplifies the RAU and greatly reduces system installation and operational costs. The system can be used for both TDM and WDM PON as well as for both WiFi and WiMax technology.
As can be observed, limited research has been done on FiWi for the smart grid, as FiWi is still considered a new technology. For this reason, a reconfigurable FiWi testbed is needed to study the best protocols, algorithms, and topology for the smart grid.
FiWi Testbed
Numerous FiWi testbeds have been developed for various reasons, ranging from evaluation and comparison to review and enhancement. A digitized RoF FiWi testbed was developed in [8], using equipment such as a modulated vertical cavity laser, wireless signal generator, digitized sampling oscilloscope, photodetector, digital-to-analog converter, arbitrary waveform generator, and vector signal analyser.
Pang et al. [9] proposed a FiWi testbed that transmits in the W-band, between the 75-110 GHz frequency range. The proposed testbed uses hardware such as a 16-QAM optical baseband transmitter, 100 GHz photodetector, Erbium-doped fiber amplifier, W-band horn antenna, low-noise amplifier, W-band balanced mixer, local oscillator, Rohde & Schwarz signal synthesizer, analog-to-digital converter, and digital signal processing-based receiver.
Both the digitized FiWi testbed and the W-band testbed are purely hardware-based testbeds. With a purely hardware-based testbed, it is not easy to reconfigure the testbed to study protocols and algorithms, making it less suitable for use in FiWi-for-smart-grid studies.
There are also implementations of network virtualization in FiWi. Network virtualization is a combination of hardware and software. Dai et al. [10] applied network virtualization to hide the differences between the fiber network and the wireless network. In terms of software, the testbed contains virtual resources that provide bandwidth, computing capacity, storage, and virtual networks.
Meng et al. [11] proposed a Modified Weighted Round Robin (MWRR) algorithm based on a model of FiWi network virtualization; this testbed uses MATLAB as its programming platform.
However, since a network-virtualization testbed is partially software-based, it is not able to capture some of the nonlinear effects.
Lim et al. [12] proposed a testbed incorporating a liquid-crystal-on-silicon (LCoS)-based programmable optical processor (POP) in the remote node to study the performance of a WDM-based 60 GHz millimetre-wave FiWi link. The POP is flexible and robust and enables simple future system upgrades. In this testbed, the POP de-multiplexes the interleaved channels before distributing them to base stations with error-free transmission. Although the POP is a programmable module, its application is limited to the remote node and does not cover the overall FiWi network architecture.
A fully and rapidly reprogrammable FiWi testbed is needed to study the most suitable protocols and algorithms for the smart grid network. The next section discusses a proposed FiWi testbed that uses software-defined radio (SDR) to make it fast-reconfigurable, simple, and scalable. Figure 3 shows the proposed fast-reconfigurable FiWi testbed, in which the major components are SDRs for the OLT, ONUs, and end users. We chose SDR as the main component of our testbed because it can support multiple functionalities simply by modifying the software, without changing the hardware, making the study of protocols and algorithms simpler, faster, and cost-effective.
Fig. 3. FiWi Testbed Proposed
The SDR chosen for our testbed is the Universal Software Radio Peripheral (USRP) by National Instruments (NI). A wide range of USRPs is available on the market, but we chose the USRP-2922 in particular as it meets our testbed requirements, which are to transmit in the 2.5 GHz frequency band and to support simultaneous transmit/receive.
The USRP that acts as the OLT is first connected to an electrical/optical converter, then to a 20 km single-mode fiber spool, and then to a passive optical splitter that splits the data toward multiple ONUs. From the splitter, the data pass through an optical/electrical converter before reaching the USRPs that act as the ONUs. From the ONUs' USRPs to the end users' USRPs, antennas are needed, as the transmission is done wirelessly.
Every SDR is further connected to a computer equipped with a Graphical User Interface (GUI), as shown in Figure 4, to monitor traffic coming in and out. The study of protocols and algorithms requires the program to be modified via LabVIEW at the affected USRP, without having to modify the hardware, and the results can then be observed on the GUIs. Among the parameters that can be obtained are throughput, delay, and jitter. The number of packets received was compared between simulation and the FiWi testbed: up to 11,000 packets were received for both methods, validating the results achieved using our FiWi testbed.
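The GUI-reported statistics can be reproduced offline from packet logs; a minimal sketch, where the log format (transmit time, receive time, size) is our assumption rather than the actual LabVIEW implementation:

```python
# Illustrative post-processing of packet logs into the parameters the
# GUI reports (throughput, delay, jitter); the log format is assumed.

def link_stats(log):
    """log entries: (tx_time_s, rx_time_s, size_bytes), sorted by rx."""
    delays = [rx - tx for tx, rx, _ in log]
    span = (log[-1][1] - log[0][1]) or 1e-9
    throughput_bps = 8 * sum(size for _, _, size in log) / span
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / max(len(delays) - 1, 1))  # mean delay variation
    return {"throughput_bps": throughput_bps,
            "mean_delay_s": sum(delays) / len(delays),
            "jitter_s": jitter}

print(link_stats([(0.00, 0.012, 1500), (0.01, 0.024, 1500), (0.02, 0.033, 1500)]))
```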
Conclusion
In this paper, we have introduced the fundamentals of the smart grid and FiWi networks, and we have reviewed the state of the art of FiWi networks for the smart grid as well as selected FiWi testbeds. We have also introduced our proposed fast-reconfigurable FiWi testbed along with its number-of-packets-received and delay results, which validate the testbed, as the results obtained via simulation and via the testbed are in line with each other. | 2019-04-16T13:26:47.767Z | 2017-11-22T00:00:00.000 | {
"year": 2017,
"sha1": "c4e1abaec75b639cfbdeb464407a27a49ed0cff0",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/31/epjconf_incape2017_01021.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "82bf25982f8db07316c03b40c221a85aebba0676",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
232282696 | pes2o/s2orc | v3-fos-license | The dynamics of inflammatory markers in coronavirus disease-2019 (COVID-19) patients: A systematic review and meta-analysis
Background: Coronavirus disease-2019 (COVID-19) is a global pandemic, and the high mortality rate among severe or critical COVID-19 cases is linked with SARS-CoV-2 infection-induced hyperinflammation of the innate and adaptive immune systems and the resulting cytokine storm. This paper conducts a systematic review and meta-analysis of published articles to evaluate the association of inflammatory parameters with severity and mortality in COVID-19 patients. Methods: A comprehensive systematic literature search of medical electronic databases, including Pubmed/Medline, Europe PMC, and Google Scholar, was performed for relevant data published from January 1, 2020 to June 26, 2020. Observational studies reporting clear extractable data on inflammatory parameters in laboratory-confirmed COVID-19 patients were included. Screening of articles, data extraction, and quality assessment were carried out by two authors independently. Standardized mean differences (SMD)/mean differences (MD/WMD) and 95% confidence intervals (CIs) were calculated using random- or fixed-effects models. Results: A total of 83 studies were included in the meta-analysis. Of these, 54 studies were grouped by severity, 25 studies by mortality, and 4 studies by both severity and mortality. Random-effects model results demonstrated that patients in the severe COVID-19 group had significantly higher levels of C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), procalcitonin (PCT), interleukin-6 (IL-6), interleukin-10 (IL-10), interleukin-2R (IL-2R), serum amyloid A (SAA), and neutrophil-to-lymphocyte ratio (NLR) than those in the non-severe group. Similarly, the fixed-effects model revealed a significantly higher ferritin level in the severe group compared with the non-severe group. Furthermore, the random-effects model results demonstrated that the non-survivor group had significantly higher levels of CRP, PCT, IL-6, ferritin, and NLR than the survivor group. Conclusion: The measurement of these inflammatory parameters could help physicians rapidly identify severe COVID-19 patients, facilitating the early initiation of effective treatment. Prospero registration number: CRD42020193169.
Introduction
Coronavirus disease 2019 is caused by the zoonotic agent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This virus emerged in the human population in late December 2019 in Wuhan, Hubei province, central China, and has since spread across the globe. 1,2 Owing to the rapid increase in the number of COVID-19 cases and its uncontrolled worldwide spread, it was declared by the WHO a Public Health Emergency of International Concern on January 30, 2020, and further labeled a pandemic on March 11, 2020. 3,4 As of September 28, 2020, the COVID-19 pandemic had over 32.7 million confirmed cases with 991,000 deaths. 4 The clinical presentation of COVID-19 ranges from mild illness to critical illness. While most COVID-19 patients have a mild influenza-like illness and may be asymptomatic, a minority of patients experience severe pneumonia, acute respiratory distress syndrome (ARDS), multiple organ failure (MOF), and even death. 5 As soon as patients progress to the severe or critical stage, the risk for poor outcomes increases significantly. 6 It is estimated that around 10-15% of mild COVID-19 patients advance to severe, and 15-20% of severe cases progress to become critical, with many of the individuals in the critical category needing treatment in intensive care units (ICU). 7 As the number of COVID-19 cases increases globally and treatment in intensive care units (ICU) has become a major challenge, early identification of severe forms of COVID-19 is crucial for the timely triaging of patients. 8 Severe or critical COVID-19 is strongly linked with mortality, 9 and the high mortality rate amongst these cases is linked with SARS-CoV-2 infection-induced hyperinflammation of the innate and adaptive immune systems and the resulting cytokine storm, a cytokine release syndrome (CRS)-like syndrome in severe/critical COVID-19 cases. [10][11][12][13] Studies have reported that inflammatory parameters are closely linked to COVID-19 severity and mortality. [14][15][16][17] In addition, two recent meta-analyses have also shown an association of inflammatory parameters with COVID-19 severity. 18,19 However, with an increase in the number of studies now published, it is important to carry out more comprehensive reviews and analyses of inflammatory parameters linked to COVID-19 severity. We, therefore, conducted a comprehensive systematic review and meta-analysis of published articles, from January 1, 2020 to June 26, 2020, to evaluate the association of inflammatory parameters with severity and mortality in COVID-19 patients.
Methods
This systematic review and meta-analysis has been conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 20 and was registered with PROSPERO-The International Prospective Register of Systematic Reviews (Registration No. CRD42020193169). 21
Search strategy
A comprehensive systematic literature search of medical electronic databases including PubMed/Medline, Europe PMC, and Google Scholar was performed for relevant data published from January 1, 2020 to June 26, 2020. Pubmed/Medline and Europe PMC were searched using the following search terms: ("COVID-19" OR "2019-nCOV" OR "SARS-COV-2" OR "severe acute respiratory syndrome coronavirus 2" OR "novel coronavirus disease" OR "COVID-19 patients" OR "novel coronavirus 2019" OR "coronavirus disease-2019") AND ("erythrocyte sedimentation rate" OR "C-reactive protein" OR "ferritin" OR "procalcitonin" OR "interleukin-6" OR "interleukin-10" OR "interleukin-2R" OR "tumor necrosis factor-α" OR "serum amyloid A" OR "neutrophil-to-lymphocyte ratio" OR "inflammatory markers" OR "inflammatory parameters"), whereas Google Scholar was searched using the keywords ("COVID-19" OR "2019-nCOV" OR "SARS-COV-2" OR "novel coronavirus disease" OR "COVID-19 patients" OR "novel coronavirus 2019" OR "coronavirus disease-2019") AND ("inflammatory markers" OR "inflammatory parameters"), owing to the 256-character limit on the search string. Two authors (RKM and SP) independently screened the results from the initial search by titles and abstracts for relevance, and the full texts were reviewed against the eligibility criteria. To identify eligible studies, the reference lists of previous studies and systematic reviews were also searched, and the identified records were screened against the inclusion criteria specified for the current systematic review and meta-analysis. Any ambiguity that occurred during study selection was resolved by mutual discussion and consensus.
Inclusion and exclusion criteria
The inclusion criteria were as follows: (a) observational studies (cohort studies, case-control studies, cross-sectional studies, and case series) reporting clear extractable data on inflammatory parameters in laboratory-confirmed COVID-19 patients; (b) studies comparing the inflammatory parameters between severe and non-severe COVID-19 patients or between survivors and non-survivors. The exclusion criteria were as follows: (a) review articles, non-research letters, editorials, commentaries, case reports, animal studies, original research with samples below 10, abstracts from meeting proceedings, and non-English language articles; (b) studies conducted particularly in children or pregnant women; (c) unclear reporting of levels of inflammatory parameters; (d) studies that did not provide a full-text version; (e) articles that were not peer-reviewed or accepted for publication; (f) laboratory information not presented as mean (standard deviation, SD) or median (interquartile range, IQR, or range). In addition, when two or more studies were conducted at the same center/hospital recruiting patients during the same or overlapping periods, we selected the one with the larger sample size unless the other studies presented relevant information not included in the larger study. In this study, mild and moderate COVID-19 patients were included in the non-severe group, whereas severe and critical COVID-19 patients were included in the severe group.
Data extraction
Data were extracted independently by two reviewers (RKM and SP). A third reviewer (VR) checked the extracted data to ensure that there were no mistakes or duplicated information. The following information of each study was extracted from included articles: first author, country, year of publication, type of publication, hospital, date of data collection, gender, age, the total number of COVID-19 patients, number of severe/ non-severe patients, or number of survivors/non-survivors and inflammatory parameters measured.
Quality assessment
The quality of included studies was assessed using the Newcastle-Ottawa Scale (NOS) 22 which is easy to use with its star rating system. Each of the included studies was judged on three broad perspectives: the selection of study groups (0-4 stars), the comparability of the groups (0-2 stars), and the ascertainment of the outcome of interest (0-3 stars), with a maximum of nine stars representing the highest methodological quality. The quality assessment was carried out independently by two authors (RKM and SP) for each original study included. Any disagreements were discussed between the two authors, and a third author (SS) was involved, if necessary, in reaching a final judgment.
Statistical analysis
Means and standard deviations of inflammatory markers were extrapolated from the sample size, median, and interquartile range (IQR) or range according to Luo et al. 23 and Wan et al. 24 when the results of the included studies were presented as median and interquartile range (IQR) or range. A pooled mean difference (MD/WMD) with 95% CI was used to assess the difference in inflammatory markers between COVID-19 patients with and without severe disease, or between COVID-19 patients who survived and those who did not, in studies with the same clinical units and measures; otherwise, the standardized mean difference (SMD) was used. Statistical heterogeneity among studies was assessed using Cochran's Q test and the I² statistic. A p-value of <0.10 for Cochran's Q test indicates substantial heterogeneity between studies, whereas I² values of 25%, 50%, and 75% were interpreted as low, moderate, and substantial heterogeneity, respectively. If heterogeneity existed, the random-effects model was used; otherwise, the fixed-effects model was used. Funnel plots were designed to assess publication bias, and plot symmetry was assessed by Egger's linear regression test (a p-value <0.1 indicated significant bias). If publication bias was confirmed, Duval and Tweedie's nonparametric trim-and-fill method was used to adjust for potential publication bias. 25 A leave-one-out sensitivity analysis was performed by removing one study at a time through influence analysis to assess the stability of the results. The results of individual studies were pooled using Review Manager Version 5.4. All other statistical analyses were done using STATA (version 16; Stata Corporation, College Station, TX). A p-value <0.05 was considered statistically significant, except for Egger's test and the test of heterogeneity, i.e., Cochran's Q test.
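For illustration, a minimal sketch of two of these steps, the Wan et al. median/IQR-to-mean/SD approximation and DerSimonian-Laird random-effects pooling with I²; this is not the Review Manager/STATA code actually used, and the example numbers are hypothetical:

```python
# Minimal sketch of (1) Wan et al.'s mean/SD approximation from median
# and IQR and (2) DerSimonian-Laird random-effects pooling with I^2.
import math

def mean_sd_from_median_iqr(q1, median, q3):
    """Wan et al. (2014) large-sample approximation."""
    return (q1 + median + q3) / 3.0, (q3 - q1) / 1.35

def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooled effect, 95% CI, and I^2 (%)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

print(mean_sd_from_median_iqr(4.2, 8.5, 15.1))
print(random_effects_pool([1.1, 0.9, 1.4], [0.04, 0.06, 0.05]))
```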
Outcome of the database search
A total of 5612 articles were retrieved through the database search and from the reference lists of published articles, of which 4261 remained after the removal of duplicates. Following the screening of title/abstracts, 263 articles were selected for full-text assessment. 83 studies were finally selected for data extraction and meta-analysis after excluding ineligible studies for the following reasons: studies not stratified by severity or mortality (n = 82), not relevant for inclusion (n = 10), data not extractable/unclear reporting of inflammatory parameters/data not clearly presented (n = 18), full text not available (n = 2); overlap of samples between the groups (n = 1), inflammatory parameters not reported (n = 6), diagnosis not clear (n = 3), not laboratory diagnosed COVID-19 (n = 12), laboratory information not presented as mean (standard deviation, SD) or median (interquartile range, IQR or range) (n = 23) and hospital and study period overlap with other included studies (n = 23). The flow diagram of the number of studies screened and included in the meta-analysis is shown in Fig. 1.
Meta-analysis of inflammatory markers in patients with COVID-19 stratified by severity
Information on C-reactive protein (CRP) was available in 44 studies with 2623 severe and 5275 non-severe COVID-19 patients. The random-effects model analysis showed that, compared with the non-severe group, the severe group had significantly higher CRP [SMD = 1.14, 95% CI: 0.97-1.32; p < 0.00001] with substantial heterogeneity [I² = 90%]. 17 studies analyzed erythrocyte sedimentation rate (ESR), involving 1075 severe and 2362 non-severe COVID-19 patients. The value of ESR was significantly higher in the severe group compared with the non-severe group [MD = 12.08, 95% CI: 8.04-16.11; p < 0.00001] with high heterogeneity [I² = 75%] in a random-effects model. 30 studies with 2217 severe and 3682 non-severe COVID-19 patients were included for the meta-analysis of procalcitonin (PCT). The random-effects model demonstrated that the severe group had significantly increased PCT compared with the non-severe group [SMD = 0.88, 95% CI: 0.68-1.08; p < 0.00001] with substantial heterogeneity [I² = 90%]. A total of 18 studies with 1564 severe and 2054 non-severe COVID-19 patients were included in the meta-analysis for interleukin-6 (IL-6). In a random-effects model, the value of IL-6 was significantly higher in the severe group than in the non-severe group [MD = 16.94, 95% CI: 12.72-21.16; p < 0.00001] with substantial heterogeneity [I² = 96%]. In total, 8 studies with 864 severe and 762 non-severe COVID-19 patients were included in the meta-analysis for interleukin-10 (IL-10). The estimated pooled MD indicated that the severe group had a significantly higher level of IL-10 than the non-severe group [MD = 2.03, 95% CI: 1.36-2.70; p < 0.00001] with substantial heterogeneity [I² = 82%] in a random-effects model. Information on interleukin-2R (IL-2R) was available in two studies, including 345 severe and 186 non-severe COVID-19 patients. In a random-effects model, the value of IL-2R was significantly higher in the severe group than in the non-severe group [MD = 238.26, 95% CI: 31.90-444.62; p = 0.02] with high heterogeneity [I² = 84%]. For tumor necrosis factor-α (TNF-α), 7 studies with 758 severe and 682 non-severe COVID-19 patients were included in the meta-analysis, and the random-effects model analysis showed that the severe group had higher TNF-α than the non-severe group, but the difference was not significant [MD = 0.05, 95% CI: -0.57-0.68]. We obtained information about ferritin from 9 studies including 835 severe and 774 non-severe COVID-19 patients. The estimated pooled standardized mean difference indicated that the severe group had significantly higher ferritin than the non-severe group [SMD = 0.71, 95% CI: 0.60-0.81; p < 0.00001] without evident heterogeneity [I² = 5%] in a fixed-effects model. Nine studies with 482 severe and 807 non-severe COVID-19 patients were included in the meta-analysis for serum amyloid A (SAA). The estimated pooled standardized mean difference revealed a significant increase in SAA in the severe group compared with the non-severe group [SMD = 1.16, 95% CI: 0.64-1.68; p < 0.0001] with substantial heterogeneity [I² = 93%] in a random-effects model. The meta-analysis for the neutrophil-to-lymphocyte ratio (NLR) included 819 severe and 1700 non-severe COVID-19 patients from 12 studies, and a random-effects model analysis revealed significantly higher NLR in severe patients than in non-severe COVID-19 patients [MD = 3.27, 95% CI: 1.99-4.55; p < 0.00001] with substantial heterogeneity [I² = 90%] (Table 2; Supplement 1).
Inflammatory markers were also compared between survivors and non-survivors (Table 3; Supplement 2). IL-10, IL-2R, and TNF-α were not included in this meta-analysis since information about each of these parameters was available in only one study. Information about SAA was likewise available in only one study and hence it was also not included.
Subgroup analysis
In a subgroup analysis by sample size, we did not find any significant differences in the levels of CRP, ESR, PCT, ferritin, SAA, NLR, IL-10, and TNF-α between the sample size ≥100 subgroup and the sample size <100 subgroup. However, IL-6 was significantly higher in the sample size <100 subgroup than in the sample size ≥100 subgroup. In both subgroups, CRP, ESR, PCT, ferritin, SAA, NLR, IL-10, IL-6, and TNF-α were associated with the severity of COVID-19 (Supplement 3).
Subgroup analysis based on sample size showed a significant association of CRP, ferritin, and NLR with mortality of COVID-19 patients in both the sample size <100 and sample size ≥100 subgroups, whereas PCT was significantly associated with mortality in the sample size ≥100 subgroup only. In neither subgroup was ESR significantly associated with mortality. In addition, we did not find any significant differences in the levels of CRP, ESR, PCT, and ferritin between the sample size ≥100 subgroup and the sample size <100 subgroup. However, the sample size ≥100 subgroup had a significantly higher level of NLR than the sample size <100 subgroup (Supplement 4).
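Operationally, a subgroup analysis of this kind is the pooling step repeated within each stratum. A rough sketch, reusing the dersimonian_laird helper from the earlier example; the data frame and its column names (effect, variance, n_total) are hypothetical.

```python
import pandas as pd

def subgroup_meta(studies: pd.DataFrame) -> dict:
    """Pool effect sizes separately for small (<100) and large (>=100) studies.

    Assumes dersimonian_laird() from the earlier sketch is in scope and that
    `studies` holds one row per study with columns effect, variance, n_total.
    """
    results = {}
    for is_large, grp in studies.groupby(studies["n_total"] >= 100):
        name = "n >= 100" if is_large else "n < 100"
        results[name] = dersimonian_laird(grp["effect"], grp["variance"])
    return results
```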
Publication bias
Funnel plots were constructed only for those parameters that were reported in ≥10 studies. Funnel plot analysis showed an asymmetrical shape for CRP, ESR, PCT, IL-6, and NLR in the severity studies (Supplement 5). The regression-based Egger's test indicated small-study effects for CRP (p = 0.0000), IL-6 (p = 0.0000), and NLR (p = 0.0085), with borderline results for ESR (p = 0.0802) and PCT (p = 0.0531). Therefore, to adjust for publication bias, the trim and fill method was adopted; after adjustment, the funnel plots looked more symmetric than before. The trim and fill method did not impute any study for CRP, PCT, or NLR, whereas 5 and 6 studies were imputed for ESR and IL-6, respectively (Supplement 6). In the mortality studies, the funnel plot for CRP gave an Egger's test p value of 0.1058, suggesting no clear evidence of publication bias. The regression-based Egger's test showed statistically significant small-study effects for PCT (p = 0.0005) and ferritin (p = 0.0449). The trim and fill method imputed 2 studies for PCT, whereas no study was imputed for ferritin (Supplements 7 and 8).
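As a hedged illustration of the bias diagnostic used here, the sketch below implements the classic Egger regression: the standardized effect is regressed on precision, and a non-zero intercept signals funnel-plot asymmetry. Inputs are illustrative; the trim and fill adjustment is a separate iterative procedure not shown.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(effects, std_errors):
    """Egger's regression test for small-study effects.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    a non-zero intercept signals funnel-plot asymmetry.
    """
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    X = sm.add_constant(1.0 / se)           # column of ones + precision
    fit = sm.OLS(effects / se, X).fit()
    return fit.params[0], fit.pvalues[0]    # intercept and its p value
```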
Sensitivity analysis
Sensitivity analysis indicated that the combined results did not change with the exclusion of any single study for CRP, ESR, PCT, IL-6, IL-10, TNF-α, ferritin, SAA, or NLR between the severe and non-severe groups (Supplement 9). Similarly, sensitivity analysis revealed that the results were not influenced by the exclusion of any single study for CRP, ESR, PCT, ferritin, or NLR between the survivor and non-survivor groups. However, for IL-6, the pooled effect size changed after omitting Chen R et al. 14
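This kind of sensitivity check is a simple leave-one-out loop over the pooled model. A minimal sketch, again assuming the dersimonian_laird helper from the first example is in scope:

```python
def leave_one_out(effects, variances):
    """Re-pool the meta-analysis k times, omitting one study each time.

    If a single omission clearly moves the pooled estimate, that study
    drives the combined result (as reported here for IL-6).
    """
    reruns = []
    for i in range(len(effects)):
        eff = [e for j, e in enumerate(effects) if j != i]
        var = [v for j, v in enumerate(variances) if j != i]
        reruns.append((i, dersimonian_laird(eff, var)))
    return reruns
```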
Discussion
This systematic review and meta-analysis included 83 studies to investigate the association of inflammatory parameters with severity and mortality in COVID-19 patients. The findings revealed significantly higher levels of CRP, ESR, PCT, IL-6, IL-10, IL-2R, ferritin, SAA, and NLR in the severe group compared to the non-severe group with COVID-19. However, no significant difference was observed in the level of TNF-α between the severe and non-severe groups. Similarly, the levels of CRP, PCT, IL-6, ferritin, and NLR were significantly higher in non-survivors compared with survivors, whereas no significant difference was observed for ESR between survivors and non-survivors.

C-reactive protein is an acute-phase inflammatory protein produced by the liver and regulated at the transcriptional level by the cytokines IL-6 and IL-1. 105 It is an important index for diagnosing and evaluating severe pulmonary infectious diseases. 106 SARS-CoV-2 shares similar clinical features with Middle East respiratory syndrome coronavirus, 107 and in patients with severe Middle East respiratory syndrome coronavirus pneumonia, an increase in C-reactive protein levels correlates with clinical deterioration. 108 Similarly, in our meta-analysis, elevated CRP was associated with both severity and mortality in COVID-19 patients, reflecting more pronounced inflammation in severe patients.

ESR is a non-specific inflammatory marker, primarily reflecting changes in plasma protein types. 109 In the present meta-analysis, a higher ESR level was associated with the severity of COVID-19. Similarly, the systematic literature search and pooled analysis conducted by Lapić et al. 110 found that severe COVID-19 cases are associated with prominent elevations of ESR compared to non-severe cases. This increased ESR level in severe COVID-19 cases reflects a more profound inflammatory response and expression of acute-phase proteins. 111

Procalcitonin, a peptide precursor of the hormone calcitonin, is normally synthesized and released by thyroid parafollicular C cells and is widely researched as a promising biomarker for the initial investigation of bacterial infection. 112,113 Elevated PCT often occurs in sepsis and septic shock patients. 114 In the present meta-analysis, an increased level of PCT was found to be associated with severity and mortality in COVID-19 patients. Similarly, Lippi et al. 115 in their meta-analysis also demonstrated that increased PCT levels are associated with a 5-fold higher risk of severe COVID-19. During bacterial infection, the production and release of procalcitonin into the circulation from extrathyroidal sources is greatly amplified and maintained by increased levels of IL-6, IL-1β, and TNF-α, whereas the increased concentration of interferon-γ during viral infection negatively impacts the synthesis of PCT. 112,116 This is why the level of PCT remains within the normal range in the majority of patients with non-severe COVID-19, and an increased value in severe COVID-19 may indicate secondary bacterial infection. 115

Immunopathology induced by elevated proinflammatory cytokine or chemokine responses, described as a cytokine storm, has been implicated in the pathogenesis of human coronaviruses. 13
It is hypothesized that SARS-CoV-2 first binds to alveolar epithelial cells and then triggers the innate and adaptive immune systems, leading to the release of a substantial number of cytokines, including IL-6, a pleiotropic cytokine important in regulating immunological and inflammatory responses. Abnormally increased levels of such cytokines or chemokines can cause tissue damage, resulting in respiratory failure or multiple organ failure. 14,[117][118][119] In addition to its strong proinflammatory function, IL-6 induces various acute-phase proteins, such as CRP, SAA, fibrinogen, antitrypsin, hepcidin, and components of complement, which aggravate inflammatory reactions and activate the coagulation pathway, with resultant disruption of procoagulant-anticoagulant homeostasis, induction of disseminated intravascular coagulation, and multi-organ failure. 120,121 Among the various cytokines and chemokines recognized (IL-2, IL-8, IL-17, G-CSF, IP-10, and TNF-α), IL-6 has been considered the most significant cytokine, found to be increased in SARS and MERS as well as in COVID-19. [122][123][124][125][126] In our meta-analysis, high IL-6 was linked with both severity and mortality in COVID-19 patients. The importance of identifying this elevated biomarker also lies in the potential use of an antibody against IL-6, such as tocilizumab, which has been reported to effectively improve clinical symptoms and repress the deterioration of severe and critical patients. 127

Though IL-10 is an anti-inflammatory cytokine, a higher level of IL-10 was observed in the severe group compared to the non-severe group. Furthermore, higher IL-10 was also associated with mortality in COVID-19. 17 This increased level of IL-10 may be a result of the compensatory anti-inflammatory response. 111

The present meta-analysis revealed an association between higher IL-2R levels and COVID-19 severity. Wang et al. 17 reported a higher level of IL-2R in non-survivors compared to survivors. Highly expressed IL-2R initiates autoreactive cytotoxic CD8+ T-cell-mediated autoimmunity. Meanwhile, IL-2 promotes the proliferation of natural killer cells that strongly express IL-2R, facilitating the release of cytokines, which further induces the deadly "cytokine storm". 128

Another potential proinflammatory biomarker for COVID-19 is TNF-α, which facilitates the apoptosis of both lung epithelial cells and endothelial cells, leading to vascular leakage, alveolar edema, and hypoxia. 13 It also mediates airway hyper-responsiveness and pathogenesis in influenza and SARS-CoV infection. 129 In our meta-analysis, no significant difference was observed in the level of TNF-α between the severe and non-severe COVID-19 groups. The likely reason is that four studies 58,63,64,80 reported no significant difference in TNF-α between the severe and non-severe groups, one study 16 reported a significantly lower level in the severe group, and two studies 15,57 reported a significantly higher level in the severe group.
Serum ferritin is an acute-phase protein that can be used as a prognostic marker for tissue damage or acute infections. 130 Patients with COVID-19 in the severe group had a higher level of serum ferritin than those in the non-severe group. Furthermore, in our meta-analysis, a higher serum ferritin level was associated with mortality in COVID-19 patients. Though the pathophysiological background responsible for the association between hyperferritinemia and disease severity in patients with COVID-19 is not clearly understood, it is suggested that hyperferritinemia in COVID-19 patients is most likely due to the cytokine storm and a secondary hemophagocytic lymphohistiocytosis. 131

Serum amyloid A is a non-specific acute-phase protein primarily produced in hepatocytes in response to the cytokines IL-1β, IL-6, and TNF-α, and can be used as a prognostic marker for tissue injury or acute infections. [132][133][134][135] It can promote inflammatory responses even at very low concentrations by activating chemokines and inducing chemotaxis. 136,137 The level of SAA was found to be positively associated with the degree of pneumonia in SARS. 138 Similarly, a significantly higher SAA level was observed in the severe group compared to the non-severe group. Serum amyloid A was also found to be associated with mortality in COVID-19, and the increased SAA in non-survivors indicated progressive immune-mediated damage in deceased patients. 14 Studies reported that severe/critical COVID-19 patients had large amounts of IL-1β, IFN-γ, IP-10, MCP-1, MIP-1, TNF-α, and other cytokines in the circulation, which stimulate liver cells to produce SAA. 39,139

The neutrophil-to-lymphocyte ratio (NLR) is a well-established inflammatory marker that reflects the systemic inflammatory response and is easily obtainable through routine blood count analysis. 140 Many COVID-19 patients had increased neutrophil counts and decreased lymphocyte counts during the severe phase of COVID-19 infection. 141 Recently, the meta-analysis conducted by Lagunas-Rangel 142 showed that NLR values were significantly associated with the severity of COVID-19. Similarly, in our meta-analysis, elevated NLR was associated with severity and mortality in COVID-19 patients. This increased NLR reflects the enhanced inflammatory process in severe/critical COVID-19 patients. Therefore, measuring NLR levels in COVID-19 patients may help in assessing disease severity.
This systematic review and meta-analysis had several limitations that should be acknowledged. First, we excluded articles published in languages other than English, along with articles in which the data were not presented as mean (standard deviation, SD) or median (interquartile range, IQR, or range), which may have introduced bias into the results. Second, we converted non-normally distributed data to normally distributed data, which may have biased the results. Third, the majority of the included studies were from China, which limits the generalizability of the results. Fourth, most of the included studies were retrospective and observational; therefore, the results must be interpreted with caution. Lastly, substantial heterogeneity existed in almost all of the meta-analyses.
Conclusion
In conclusion, our systematic review and meta-analysis showed significantly increased serum concentrations of CRP, ESR, PCT, IL-6, IL-10, IL-2R, ferritin, SAA, and NLR in severe COVID-19 patients compared to those with non-severe COVID-19. Similarly, we found significantly increased levels of CRP, PCT, IL-6, ferritin, and NLR in non-survivors compared to survivors. These inflammatory parameters could help physicians rapidly identify severe COVID-19 patients, facilitating the early initiation of effective treatment. In addition, these inflammatory parameters could be used to predict the transition from mild to severe/critical infection in COVID-19 patients.
Authorship statement
R.K. Mahat contributed to the concept, design, methodology, analysis, interpretation, supervision, writing, reviewing and editing. S. Panda contributed to the methodology, interpretation, writing, reviewing and editing. V. Rathore contributed to the methodology, analysis, interpretation, writing, reviewing and editing. S. Swain contributed to the methodology, supervision, reviewing and editing. L. Yadav contributed to the methodology, reviewing and editing. S.P. Sah contributed to the methodology, reviewing and editing.
Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore
Coral reef damage in the Probolinggo region is thought to be caused by several factors. The first is fishing practices that use cyanide and explosives. The second is the extraction of coral for use as decoration or construction material. A further factor is likely the presence of large industry on the seashore, such as the Paiton Electric Steam Power Plant (PLTU) and similar facilities. For the development of the coral reef ecosystem, accurate data are crucially needed to support future policy, so research on coral reef coverage percentage needs to be conducted continuously. The aim of this research is to collect biological data on coral reefs and to determine the coral reef coverage percentage, in an effort to construct baseline data on coral reef condition on the Binor seashore, Paiton, Probolinggo regency. The method used in this research is the Line Intercept Transect (LIT) method. LIT is a method used to characterize the benthic community on a coral reef based on growth percentage, and to record benthic quantity along a transect line. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore is 57.65%, which may be categorized as good condition. The remainder consists of dead coral at only 1.45%, other life forms at 23.2%, and non-living forms at 17.7%. The good condition of the coral reef results from coral transplantation on the seashore, so the reef is dominated by Acropora branching. The coral Mortality Index (IM) was 24.5%. Observation and calculation show that the reef is dominated by hard coral of the Acropora Branching (ACB) type with a coverage percentage of 39%, Coral Massive (CM) with 2.85%, Coral Foliose (CF) with 1.6%, and Coral Mushroom (CMR) with 8.5%. Observation at 10 meters depth gave a coral coverage percentage of 63.33%. Acropora branching coral accounts for 75% of the living coral found at this depth, while the remaining 25.21% comprises Acropora tabulate coral and non-Acropora coral in branching, massive, sub-massive, foliose, and mushroom life forms; the coral Mortality Index (IM) reached 28.5%. The high coral coverage percentage at Paiton is due to successful coral transplantation and low levels of community activity at this location. The dominant large Acropora branching corals are estimated to come from a few types, showing that the transplanted coral has grown large and formed a complex three-dimensional structure suitable for fish and benthic life.
Introduction
Coral reef is a dynamic and integrated ecosystem whose mineral construction material is deposited by plants and animals. These elements make the coral reef an ecosystem of high diversity and complexity. Conditions in the surrounding area can also cause a coral reef to become fragile and degrade rapidly, since coral grows best near the warm ocean surface and close to land. Changes in marine conditions, or in the air and land that interact with the ocean, may affect the life of the coral ecosystem (Buddemeier and Kinzie, 1976). Indonesia has around 50,000 km2 of coral reef ecosystem spread throughout its national waters, holding a healthy fishery resource potential of approximately 80.802 ton/km2/year.
Damage to coral reefs is mostly caused by large-scale exploitation. The main threats to coral reefs are overfishing, destructive fishing methods, and sedimentation and pollution from land. Human activities are now estimated to threaten 88% of South East Asian coral reefs, jeopardizing their vital biological and economic value to society. Around 50% of the threatened coral reefs are at a high or very high level of threat, and only 12% are at a lower level of threat (Burke et al., 2002).
In this work, attempts have been made to determine the coral reef coverage percentage in an effort to construct baseline data on coral reef condition on the Binor seashore, Paiton, Probolinggo regency. In addition, differences were investigated to characterize coral reef condition on the Binor seashore, Paiton, Probolinggo regency.
Research Time and Place
This research was conducted in July 2011. The research site is located on the Binor seashore, Paiton, Probolinggo regency. A map of the research site is displayed in Figure 1.
Research Tools and Material
The tools used in this research were scuba diving equipment, a high-pressure compressor, a depth sounder (Deep Scan), an underwater photo camera, a roll meter (for the underwater transect), underwater stationery, a coral identification book, and a GPS receiver to pinpoint the location. The material examined in this research is the coral reef found on the Binor Paiton-Probolinggo seashore.
Research Method
The method used in this research is the Line Intercept Transect (LIT) method. LIT is frequently used to characterize the benthic community on a coral reef based on growth percentage, and to record benthic quantity along a transect line. The community is classified using life form categories, which give a descriptive picture of the morphology of the coral community. LIT is also used to observe coral reef condition in detail using a permanent transect line (English et al., 1994). The life form classification is given in Table 1.

The LIT operational procedure, following English et al. (1994), is as follows:
1. Observation is carried out by two people: one lays out the transect, while the other records every coral life form category found.
2. Transects are made at two depths (3 and 10 meters). The transect length is 100 meters. The transect line is laid by spreading a roll meter with a centimeter (cm) scale.
3. The observer must be familiar with, and fully comprehend, the classification of coral growth forms, whether living coral or other biota.
4. The observer swims from point zero to the 100-meter point along the transect line and records all coral life forms in the area crossed by the line. Every life form is recorded with its intercept length (to centimeter scale). Life form categories may follow AIMS (English et al., 1994) or COREMAP.
5. If possible, the observer may also identify the observed corals, at minimum to genus level.
The coverage percentage of each coral life form category, following English et al. (1994), is:

Ci = (li / L) x 100%

where Ci is the coverage of life form category i, li is the total intercept length of category i along the transect, and L is the total transect length. The coverage percentage for all categories of living coral life form is obtained by summing the coverage of all living coral categories (English et al., 1994; source: Gomez and Yap, 1984).

Evaluating the health or condition of a coral reef ecosystem cannot be based on coral coverage percentage alone, because two areas may have the same living coral coverage percentage while their damage levels differ. The damage level is related to how much living coral has changed into dead coral. The dead coral ratio may be found through the coral mortality index (English et al., 1994):

IM = dead coral coverage / (dead coral coverage + living coral coverage)
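To make the calculation concrete, here is a minimal Python sketch of the LIT coverage and mortality index computations, assuming the standard definitions above; the segment records and category codes passed in are illustrative, not field data from this study.

```python
def lit_coverage(segments, transect_length_cm=10000):
    """Per-category coverage (%) from Line Intercept Transect records.

    segments: (category_code, intercept_length_cm) tuples recorded along
    the transect, e.g. [("ACB", 230), ("DC", 40)].
    """
    totals = {}
    for code, length in segments:
        totals[code] = totals.get(code, 0) + length
    return {code: 100.0 * total / transect_length_cm
            for code, total in totals.items()}

def mortality_index(live_cover_pct, dead_cover_pct):
    """Coral mortality index: the share of total coral cover that is dead."""
    return dead_cover_pct / (live_cover_pct + dead_cover_pct)

# Example: a 40 cm intercept of dead coral on a 100 m (10,000 cm) transect
# contributes 0.4% coverage.
```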
Results and Discussion
The Paiton Electric Steam Power Plant (PLTU) is a State Electricity Company (PLN) development project completed in April 1998. PLTU Paiton is located on the shore of Binor Village, Paiton District, Probolinggo Regency, East Java Province. A river or canal called the Malikan canal lies around 6 km to the west of the power plant. The influence of this river emerges in the rainy season, especially in the Binor area, which directly borders its outfall. Before the plant was built, the PLTU Paiton area was a mangrove spot with a sand substrate on which coral reef was likely to develop. The coral reef found in the PLTU area is newly grown. Observation and measurement were done at two depths: 3 meters and 10 meters.
Observation and measurement of the coral reef were done around PLTU Paiton near the lighthouse. This location was formerly a coral transplantation area, likely established 5-10 years ago, since remains of transplantation shelves lie buried underneath the complex branching coral structure. These transplantation shelves were found from shallow depth (2 meters) down to 10 meters. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore is 57.65%, which may be categorized as good condition. The remainder consists of dead coral at only 1.45%, other life forms at 23.2%, and non-living forms at 17.7%. The coral Mortality Index (IM) was 24.5% (Figure 3). The good condition of the coral reef results from coral transplantation on the seashore, so the reef is dominated by Acropora branching (Figure 4).
Coral reef observation and measurement on this seashore show dominance of hard coral of the Acropora Branching (ACB) type with a coverage percentage of 39%, Coral Massive (CM) with 2.85%, Coral Foliose (CF) with 1.6%, and Coral Mushroom (CMR) with 8.5% (Figure 5). Observation and measurement at 10 meters depth show very wide coral coverage, up to 63.33%. Seventy-five percent of the living coral found at 10 meters depth is Acropora branching coral, and the remaining 25.21% comprises Acropora tabulate coral and non-Acropora coral with branching, massive, sub-massive, foliose, and mushroom life forms; the coral Mortality Index (IM) reached 28.5% (Figure 6). The high coral coverage percentage at Paiton is due to successful coral transplantation and low levels of community activity at this location (Figure 7).
The dominant large Acropora branching corals are estimated to come from a few types, showing that the transplanted coral has grown large and formed a complex three-dimensional structure suitable for fish and benthic life (Figure 8). However, this dominance also reduces the variety of coral categories. The success of the coral transplantation program in this area is also supported by the low level of fishing activity, so coral may grow well without significant disruption. Aside from living coral, 25% of the substrate at 10 meters depth is covered by dead coral (Figure 9). The dead corals found are mostly of branching or massive form, and the death is likely caused by competition for position and shading. Acropora coral, especially the branching form, can grow well and fast both vertically and horizontally. This means smaller corals often lose the competition for space with larger ones, or receive less sunlight as they are covered by the coral network structure above. The large complexity of the three-dimensional substrate formed by branching coral meant that only a few life forms could be observed along the transect line.
Suggestion
Several coral reef rehabilitation activities have been conducted in East Java. Some of these activities have borne results, as seen at Paiton. Hopefully, such rehabilitation will not be short-lived, but will become part of long-term activity. Additional research is also needed to validate the effectiveness of this rehabilitation from ecological, social, and economic perspectives for long-term planning. With clear rules, several rehabilitation sites could serve as diving or swimming destinations for tourists.
Figure 2. Data record model of coral life form.
Figure 4. (a) Coral reef condition at Paiton at 3 meters depth, and (b) the remains of coral reef transplantation shelves.
Figure 7. Substrate condition on the Paiton seashore at 10 meters depth, filled by hard coral. Domination occurs among a few coral species.
Generally, though, a variety of life forms can be seen hiding below the branching coral.
1. The research site on the Binor Paiton-Probolinggo seashore has the highest coral coverage percentage, at 63%. The high coral reef coverage at Paiton is due to successful coral transplantation and low levels of community activity at this location.
2. At 10 meters depth, 25% of the substrate is covered by dead coral. The dead corals found are mostly of branching or massive form, and the death is likely caused by competition for position and shading.
Table 1. Life form list and codes (e.g., branching, encrusting, submassive/digitate).
Table 2. Coral coverage condition criteria based on living coral.
"year": 2016,
"sha1": "165e65294f127f537589db5e02a666438ee42bcf",
"oa_license": "CCBY",
"oa_url": "https://ojs.unud.ac.id/index.php/jmas/article/download/18949/12414",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "165e65294f127f537589db5e02a666438ee42bcf",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Using sibling data to explore the impact of neighbourhood histories and childhood family context on income from work
Previous research has reported evidence of intergenerational transmission of neighbourhood status and of social and economic outcomes later in life. Research also shows neighbourhood effects on adult incomes from both childhood and adult neighbourhood experiences. However, these estimates of neighbourhood effects may be biased because of confounding factors originating from the childhood family context. It is likely that part of the neighbourhood effects observed for adults are actually lingering effects of the family in which someone grew up. This study uses a sibling design to disentangle family and neighbourhood effects on income, with contextual sibling pairs used as a control group. The sibling design helps us to separate the effects of the childhood family and neighbourhood context from adult neighbourhood experiences. Using data from Swedish population registers covering the full Swedish population, we show that the neighbourhood effect on income from both childhood and adult neighbourhood experiences is biased upwards by the influence of the childhood family context. Ultimately, we conclude that there is a neighbourhood effect on income from adult neighbourhood experiences, but that the childhood neighbourhood effect is actually a childhood family context effect. We find that there is a long-lasting effect of the family context on income later in life, and that this effect is strong regardless of the individual neighbourhood pathway later in life.
Introduction
There is an emerging body of literature that highlights the importance of taking into account the neighbourhood in which an individual grew up as a means to understand their later-in-life trajectories. Empirical evidence suggests that there is a correlation between the neighbourhood types experienced during childhood and the neighbourhoods where one lives in adulthood [1,2,3,4,5,6]. Other studies show that the neighbourhood environment experienced during childhood has a causal and long-lasting influence on adulthood socio-economic status outcomes, such as income [7,8,9,10,11,12]. The size of the effects of the childhood neighbourhood on individual outcomes is unclear, both in absolute and relative terms. When reviewing the literature on neighbourhood effects on children's outcomes, Ginther and colleagues [13] found effects which varied from substantial to almost non-existent. They argued that one explanation for these disparities could be that models of neighbourhood effects often do not control for family characteristics, which can result in biased outcomes. There are some examples of studies which argue that neighbourhood effects are very small or even non-existent when taking the family context into account (for instance, see [14]). It is notable that family effects on children's outcomes are generally found to be substantially higher than neighbourhood effects [15]. This paper contributes to the discussion on the relative importance of childhood and adulthood neighbourhood experiences in explaining later-in-life income from work, by explicitly taking the childhood family context into account. To disentangle family and neighbourhood context effects, we employ a sibling design in which the incomes of full siblings are compared. Full siblings share a substantial part of their genes, and are often raised under similar circumstances. They also share childhood neighbourhood histories and, importantly, parental motivations for moving to certain neighbourhoods. This implies that by using a sibling design, any potential selection effects related to the family's entry into the childhood neighbourhood are effectively negated. We compare outcomes for full siblings to a control group of what we call 'contextual siblings'; these are unrelated individuals whom we randomly paired together, but who share their childhood neighbourhood. Comparisons between the two groups allow us to distinguish between effects on later-in-life income that are due to the family context or the childhood neighbourhood context. Contrary to much previous work on neighbourhood effects, but in line with the (Scandinavian) economic literature using sibling comparisons, we find that the childhood neighbourhood has only a very limited effect on future income, whereas the family context plays a major role.
Neighbourhood and family effects
Neighbourhood effects arise due to critical spatial context exposures that affect individual life opportunities through a set of transmission mechanisms [16]. Although the residential neighbourhood does not represent the full range of exposures that an individual experiences [17], it acts like an access point through which many other contextual spaces are accessed. Hence, geographic variation in the local spatial opportunity structure [18] concerns not only the neighbourhood but also the higher geographic levels within which the neighbourhood is situated (for example, school attachment areas, city districts, the municipality etc.). There is a vast literature analysing how neighbourhood exposures affect individual life opportunities. This literature includes outcomes such as indicators of socio-economic status, school performance, health, cognitive abilities, and behaviours, and encompasses studies from different countries and cities, using different methodological approaches and data sets, as well as varying neighbourhood definitions. Most of these studies find evidence of neighbourhood effects (there are, however, also examples of studies finding no effects at all; see [19,20,21]). Studies have also found neighbourhood effects to vary by individual characteristics [22,23], spatial scale [24,25] and length of exposure to certain neighbourhood types [23,26,27].
The issue of timing of exposure has also been found to be central in understanding neighbourhood effects. Using experimental data from the Moving to Opportunity programme, Chetty and colleagues [7] demonstrate that moving from a high- to a lower-poverty area before the age of 13 is associated with increased college attendance, higher earnings and lower risks of single parenthood later in life. It should, however, be noted that the scale of neighbourhood used by Chetty and colleagues was far greater than is usually deployed in the neighbourhood effects literature. Similarly, Galster and Santiago [8] find that children perform better (measured at age 18) if exposed to higher-performing neighbours at a younger age. The results by Chetty et al. and Galster and Santiago suggest that at least part of the neighbourhood effects are temporally lagged and long lasting (see also [11,12,28]). This is confirmed in a study by Hedman and colleagues [10], who find for Sweden that the parental neighbourhood affects the incomes of children up to at least 17 years after leaving the parental home. A study by Sharkey and Elwert [29] suggests that children's cognitive ability is influenced by the neighbourhood of their parents, even though the child has never lived in the area him/herself. This transmission is suggested to operate through long-lasting effects on parents which then affect the outcomes of their children.
Sharkey and Elwert [29] argue that their finding of 'multigenerational effects' provides evidence of multigenerational neighbourhood effects. Studies from both sides of the Atlantic have reported multigenerational continuity in the neighbourhood environment; children living with their parents in deprived areas are more likely to reside in similarly deprived neighbourhoods as adults than others [2,3,4,5,6]. This literature argues that the neighbourhood environment is transmitted across generations in a similar way to other features of socio-economic status, via mechanisms such as inherited financial opportunities, transmission of norms and values, transmission of housing preferences and restrictions common to both parents and children (for example, belonging to a minority group). Hence, the choice (or lack thereof) of neighbourhood in adulthood is affected by both childhood neighbourhood experiences and the childhood family context. This conclusion was confirmed by Manley and colleagues [30] who compare residential neighbourhood careers of siblings to unrelated individuals originating from the same neighbourhoods. In accordance with previous literature, they find that neighbourhood status is reproduced over time, but add that siblings live more similar lives (in terms of neighbourhood environment) than unrelated individuals. The authors suggest that this is due to siblings' shared family context experiences, which in turn influences their future neighbourhood choices.
The findings of long-lasting (from childhood through to adulthood) and multigenerational neighbourhood effects, and of transmission of neighbourhood status, suggest that it is difficult to separate neighbourhood context effects from family context effects. Both the neighbourhood of residence experienced in adulthood and adult income seem to be directly and indirectly influenced by parental choices and characteristics. Galster and colleagues [9] illustrate this interconnectedness with a "holistic framework" in which they suggest that outcomes of young adults are determined by individual characteristics (both observed and unobserved) and parental characteristics (both observed and unobserved), and that those parental characteristics that remain unobserved may lead to selection biases. Specifically, parental income or wealth, together with a number of other attributes (such as networks, cognitive resources, and potentially restricting characteristics), determines the range of neighbourhoods available to them and hence the neighbourhood environments experienced by their children. The same family context variables are also known to influence children's socio-economic outcomes, which makes it difficult to separate neighbourhood effects from family context effects.
A large literature has documented intergenerational similarities in socio-economic characteristics between parents and their children (father-son income correlations are especially common; see [31,32,33,34]). This literature has become increasingly engaged with explaining such intergenerational patterns. The literature provides a number of factors which might explain how parents influence their children's adulthood socio-economic status. The first is related to parental resources. Parental education and income are often regarded as the most important explanations of socio-economic outcomes for children (see [33]). These parental resources are also known to affect children's health-awareness, fertility behaviour (timing and number of children), demand for cultural goods and services, and attitudes towards work and education. Another factor influencing intergenerational transmission is that the family context can expose children to various social problems, including exposure to violence, drug and alcohol abuse, criminality and mental illness. Such exposures may affect children's outcomes through transmission of behaviour (where parents act as role models) or developmental problems. A third broad factor is the household environment, including family structure, parental style and norms and values. Empirical evidence suggests that growing up in a single-parent household or in a large family is positively associated with school drop-out and future unemployment (e.g. [35,36]). Björklund and colleagues [14] test the effects of a number of different family traits and find that parental involvement in children's school work, parental firmness, maternal patience (willingness to plan ahead and postpone financial gains), and the number of books in the parental home are all significant factors explaining the 'family effect' when comparing incomes of siblings. Mason [37] argues that intergenerational transmission of family norms and values is significantly related to children's future socio-economic outcomes, controlling for parents' socio-economic status. Norms, attitudes and values can be transmitted through a learning process within the family, but behaviours or attitudes may also be affected by genetic composition, which obviously is (partially) transmitted between parents and their (biological) offspring. Genetic composition has been demonstrated to affect, among other things, cognitive abilities, personality traits [38,39] and risk-taking behaviour [40], all of which are likely to affect future income levels.
The use of siblings to separate family and neighbourhood effects
Using a sibling design has been argued to be a promising approach to separate neighbourhood context effects from family context effects, although such a design is not used very often [14]. Within pairs of genetically related individuals, who also share a similar family background (siblings), many of the unmeasured influences on individual outcomes can be controlled for. If siblings are sufficiently close in age, they will have experienced a similar household environment, and it can be assumed that they have also been exposed to the same family norms, values and attitudes. They will also have similar childhood neighbourhood experiences, at least in terms of their residential locations. Any sibling correlation can thus be assumed to represent a joint effect of shared family and community characteristics [34]. Sibling correlations in income are generally found to be about 0.45 for the U.S. and 0.25 for the Scandinavian countries [31]. This means that, in the U.S., almost half of the inequality in earnings can be attributed to siblings' shared background.
To separate family and neighbourhood effects, an outcome variable can be decomposed into a family and a neighbourhood component, where the first is based on sibling comparisons and the second on comparing neighbouring but unrelated children. Studies using such a design generally find that the neighbourhood context is relatively unimportant, at least in comparison with the family context. Several U.S.- and U.K.-based studies find neighbourhood correlations in the range of 0.1-0.2, while family correlations tend to be at least twice as high [15,41,42,43]. Using Swedish data for 13,000 individuals born in 1953, Lindahl [44] estimates the relative significance of family and childhood neighbourhood for school performance, educational attainment and income. Lindahl finds sibling correlations to take on values between 0.17 (income, females) and 0.43 (education, females), whereas the highest neighbourhood correlation found, unadjusted for parental background characteristics, was below 0.08 (education, males). Adjusted for parental background characteristics, neighbourhood correlations dropped to below 0.03. Equally weak neighbourhood correlations (well below the levels of the U.S. and U.K.) have been reported by Brännström [45] for Sweden, for other countries such as Norway [46], and for Toronto, Canada [47]. These studies nuance the findings of many neighbourhood effects studies which do not take the family context into account.
A different methodological approach that has been widely used for sibling designs is the family fixed-effects modelling framework. As noted previously, a key challenge within the neighbourhood effects literature is to remove bias due to (own or parental) sorting. This is commonly achieved by using fixed effects. By combining fixed effects with a sibling design, it is possible to difference out all time-invariant family-related unobservable characteristics that would otherwise risk biasing estimates if correlated with both the residential sorting of families and the outcome of choice (see Aaronson [48] for a more thorough description). Family fixed effects are also common in studies of intergenerational social mobility, where the aim is to isolate the causal influence of the family (rather than the neighbourhood) on outcomes like income or school performance. Aaronson [48] uses family fixed effects for a sample of U.S. siblings from the Panel Study of Income Dynamics (PSID) and finds some evidence of neighbourhood effects when controlling for family-specific unobservables. Using the same data source, Vartanian and Walker Buck [49] model childhood and adolescence neighbourhood effects on adult income. Like Aaronson, they find evidence of such effects for children of all age groups. Even very young children (aged 0-4) would, according to Vartanian and Walker Buck, benefit in terms of future incomes from living in more advantageous neighbourhood environments. Contrary to these results, Plotnick and Hoffman [20] find neighbourhood effects only when controlling for observed family variables but not when using the family fixed-effect approach, which also removes family-related unobservables. Like Aaronson and Vartanian and Walker Buck, they base their study on PSID data, but they restrict their sample to young adult women. Plotnick and Hoffman conclude that selection due to unobservable factors is really important and warrants caution, while also stating the possibility that their results are sample-specific. However, these studies suffer from a shortcoming that is recognized by both the authors and critical readers: by requiring variation in neighbourhood exposure between siblings, obtained through selecting siblings of different ages, gender or any other key characteristic, the design risks violating the assumption of similarity in family background. Siblings born further apart have a higher risk of being raised under different circumstances. If the unobservable family characteristics are correlated with sibling differences in neighbourhood exposure, and also affect the dependent variable (Aaronson [48] mentions ability, ambition and parental expectations as potential candidates), the 'family effect' will not be completely removed.
Sibling designs have also been used to isolate neighbourhood effects within the field of health. One example is a study by Merlo and colleagues [50] in which they analyse the relationship between (adult) neighbourhood exposure and the risk of ischemic heart disease. They use a dataset of Swedish-born brothers and calculate the average exposure to low-income neighbourhoods for each pair of brothers, intended to capture the brothers' joint exposure. To capture individual trajectories, they also calculate how each individual departs from this joint family mean. Both these variables (family mean and individual departure) are used to estimate the relative impact of family and adult neighbourhood trajectory using a multilevel modelling strategy. The authors conclude that the intra-family correlation is much higher than the intra-neighbourhood correlation. In fact, they find that the latter is very small, in the order of 1.5%. They also show that the estimated neighbourhood effect becomes much smaller when taking the brother's exposure into account.
In the current paper we adopt an analytical strategy similar to that of Merlo and colleagues [50]. We analyse the residential trajectories of siblings within a multilevel framework; we calculate joint sibling exposure and individual departures, and compare these results to an 'individual model' where the residential trajectory is based solely on individual experiences. Unlike Merlo and colleagues, but in line with the literature on intergenerational social mobility, we also compare the siblings to a group of randomly paired unrelated individuals who originate from the same neighbourhood, which we call 'contextual siblings'. Using this design we are able to isolate the part of the variation in the outcome variable that is due to the family context and the childhood neighbourhood context, while also obtaining estimates for the effects of exposure along the adulthood neighbourhood trajectory.
Data and methods
This paper is part of a project funded by the European Research Council (ERC). As part of the granting procedure, the project proposal was evaluated by both the Delft University of Technology institutional ethics committee and the ERC ethics committee. Both committees approved the project. The data used for this study are derived from GeoSweden, a register-based longitudinal individual-level micro-database owned by the Institute for Housing and Urban Research, Uppsala University. The GeoSweden database is not based on a sample; it contains the entire Swedish population tracked from 1990 to 2010. The database is constructed from a set of different annual administrative registers including demographic, geographic, socio-economic and real estate data for each individual living in Sweden each year. For each person in the dataset it is possible to identify their parents and, through them, also their siblings. Although the data used cannot be publicly shared, we have made our Stata code available through protocols.io to enhance the reproducibility of our research (http://dx.doi.org/10.17504/protocols.io.z6af9ae).
Contrary to most previous neighbourhood effect studies using siblings, we are explicitly interested in any long-lasting effects from childhood neighbourhoods on adulthood outcomes. Since the dependent variable is (logged) income from work, including work-related transfers, we need to follow individuals for a sufficiently long time to move beyond the most turbulent years (at the beginning of individuals' labour market careers) when incomes tend to fluctuate. Income from work represents the sum of cash salary payments, income from active businesses, and tax-based benefits that employees accrue as terms of their employment (sick or parental leave, work-related injury or illness compensation, daily payments for temporary military service, or giving assistance to a handicapped relative). For both siblings to have reached a more stable stage in life within the 20-year period for which data are available, we need them to leave the parental home at the beginning of the data period. In practice, we have selected siblings that leave the parental home between 1990 and 1997 (the year of departure we denote as t), which allows us to follow all individuals for 14 consecutive years. The dependent variable, logged income from work, is measured as the average income in years 12, 13 and 14 after leaving the parental home, to reduce biases due to temporary fluctuations. Since the calendar years of these events (years t+12 to t+14) vary among individuals, we have adjusted income for inflation with 1990 as a base year. This leaves us with eleven years over which we can follow individual neighbourhood trajectories (to avoid spurious correlation, we begin measuring the dependent variable one year after we measure the independent ones; hence, all independent variables are measured at t+11). Our individuals are then around 30 years of age, an age by which most should have finished their studies, established themselves in the labour market, settled down and are likely to have started or be starting a family.
We measure neighbourhood exposure in adulthood rather than in childhood (as is done in much previous work). Since both siblings have left the parental home during our period of study, we obtain the necessary neighbourhood variation within sibling pairs almost by default. It is very unusual for two siblings to live in the same neighbourhood environment for eleven consecutive years of independent housing careers. Hence, there is no need to select siblings on the criterion of variation in childhood neighbourhood exposure; rather the contrary, since childhood neighbourhood and childhood family variation are associated and the basic idea of the sibling setup is to identify individuals with similar family exposures. Unlike much previous work, we thus select siblings on the criterion of similarity. Only if the siblings are sufficiently similar can we argue that they share family exposure, which consequently can be controlled away when comparing later-in-life outcomes of the siblings.
To be included in the data, siblings must meet all of the following criteria: i) both siblings are aged between 15-21 in 1990; ii) the siblings are born no more than three years apart; iii) both siblings live in the parental home in 1990; iv) at least one of the siblings leaves the parental home between 1991 and 1993; v) the other sibling leaves the parental home at most four years after the first sibling; vi) the siblings are of the same sex. The parental home could be either the mother's or the father's home, as long as both siblings live in the same home. For simplicity, we have restricted the analysis to two siblings per family. In the case of multiple sibling pairs within the same family that fulfil the above criteria, we have selected the sibling pair closest in age, to maximise similarity of exposure to the neighbourhood and family environment and resources. If there were multiple sibling pairs within the same family with the same age difference, we have selected the oldest pair. Analyses are run separately by gender. The restrictions described above leave us with 19,706 males (9,853 male sibling pairs) and 24,924 females (12,462 female sibling pairs). We acknowledge that the matching process used in the data design is relatively simplistic. We have adopted this approach rather than a more sophisticated one for both pragmatic and conceptual reasons. Pragmatically, the group of contextual siblings is already substantially smaller than the group of real siblings, and further restrictions risk reducing the group even further. Conceptually, important elements of the family relationship, such as genetic similarity and the precise nature of exposure to family environments and behaviours, are unmeasured anyway, and much of this is already accounted for in the fixed part of our model.
Neighbourhoods are defined according to the SAMS (Small Area Market Statistics) classification scheme, made by Statistics Sweden in collaboration with each respective municipality. The SAMS areas are constructed to be relatively homogenous in terms of housing type, tenure and construction period. Although the usage of administrative areas in neighbourhood effect studies has been critiqued [51], we argue that SAMS areas capture the physical structure of the surrounding environment sufficiently well; they are often used in similar research, and maintaining this approach allows our results to be comparable. More importantly, bespoke neighbourhoods (see [24]) are inappropriate here because we need fixed neighbourhood boundaries to be able to construct a control group (the contextual sibling pairs, described later). Our neighbourhood variable of interest is the share of low-income individuals in each neighbourhood. We define low income as belonging to the three lowest income deciles of the national income distribution. For each year and neighbourhood in the data, we calculate the share of low-income earners among working-age people (20-64), for neighbourhoods with at least 30 inhabitants of working age. As noted above, all independent variables in our analyses are measured eleven years after having left the parental home. Unlike the other independent variables, neighbourhood exposure is measured cumulatively, over the period 1-11 years after leaving the parental home. As a consequence of this cumulative measure, the neighbourhood variable can, theoretically, take values between 0 and 1100 (where 1100 equates to living only among low-income neighbours for the entire eleven-year period). We have not included characteristics of the childhood neighbourhood in our models; we only estimate the variation at the childhood neighbourhood level (see the next section on modelling strategy). The reason for not including childhood neighbourhood characteristics in our models is the risk of overcontrolling. It is likely that childhood neighbourhood characteristics also affect household status later in life, level of education, employment status, housing tenure, and the residential neighbourhood trajectory. By controlling for these characteristics, and then for childhood neighbourhood characteristics as well, the potential range of neighbourhood effects on income would be vastly truncated.
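A minimal sketch of this cumulative exposure measure, assuming a long-format register extract with one row per person-year; the column names are hypothetical, and 'share_low_income' stands for the neighbourhood-year share (in per cent) of working-age residents in the three lowest national income deciles.

```python
import pandas as pd

def cumulative_exposure(panel: pd.DataFrame) -> pd.Series:
    """Sum each person's annual neighbourhood share of low-income earners
    over years 1-11 after leaving the parental home (range 0-1100)."""
    window = panel[panel["years_since_leaving"].between(1, 11)]
    return window.groupby("person_id")["share_low_income"].sum()
```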
Modelling strategy
We model neighbourhood effects using a multilevel framework: individuals nested in families, nested in childhood neighbourhoods. We adopt this approach given that we wish to identify whether the childhood neighbourhood has a lasting impact on later-life outcomes (see [28]) and to recognise the clustering at the level of the childhood neighbourhood in the data. The multilevel model provides us with a tool to separate family-level variation from childhood neighbourhood variation. Thus, the model setup allows us to take a first step towards identifying a neighbourhood effect that is not confounded by the family context.
The model is written as:

ln(inc_ijk) = b0 + b1 X_ijk + b2 Y_ijk + b3 N_ijk + v_k + u_jk + e_ijk    (Eq 1)

where:
ln(inc_ijk) = logged income from work, including work-related benefits, measured as the average over years 12-14 after leaving the parental home;
X_ijk = a range of individual control variables that are time-invariant or not affected by family (age, sex, country of birth);
Y_ijk = a range of individual control variables that are time-variant and might be affected by the childhood family or childhood neighbourhood (household composition, education level, employment status, tenure), all measured 11 years after leaving the parental home;
N_ijk = individual cumulative neighbourhood exposure, measured over the eleven years after leaving the parental home;
v_k = variation at the childhood neighbourhood level;
u_jk = variation at the family level;
e_ijk = an individual error term.

In order to fully benefit from the sibling relationship in our data, we adopt the strategy of Merlo and colleagues [50] and compare the 'standard model' described above, which measures neighbourhood exposure at the individual level, to a model where the individual estimate is replaced by two variables. The first of these is estimated at the family level: the family mean of cumulative neighbourhood exposure; the second variable measures the individual departure from the family mean. The family mean of cumulative neighbourhood exposure represents the average of the adult neighbourhood exposures of the two siblings. Given that the variable takes the neighbourhood pathways of both siblings into account, it implicitly contains familial background aspects shared by siblings that affect their residential paths. Thus, although we cannot directly measure these shared aspects, the family mean variable may well capture effects of shared genetic composition, abilities, temperament, upbringing, norms and values, attitudes, parental guidance, (monetary) support or other tangible and intangible items shared by siblings but not by unrelated individuals.
The second variable, individual departure from the family mean, is obtained by subtracting the family mean from the individual exposure. Hence, the variable estimates the extent to which the individual pathway deviates from the shared sibling exposure. A positive value means that the individual has a higher exposure to low-income neighbours over the last eleven years than their sibling. We argue that by replacing individual neighbourhood exposure with these two variables, family mean and individual departure from the family mean, we are able to distinguish the family influence from the neighbourhood effect arising from adulthood neighbourhood exposure. Any lingering family influence that affects both siblings similarly should be captured by the family mean, whereas the individual departure variable represents the unique pathway of each individual, free from family influence. Thus, our model using these two variables is written as:

\ln(inc_{ijk}) = \beta_0 + \beta_1 X_{ijk} + \beta_2 Y_{ijk} + \beta_3 F_{jk} + \beta_4 I_{ijk} + v_k + \mu_{jk} + e_{ijk}  (Eq 2)

In Eq 2, the individual cumulative neighbourhood exposure, N_{ijk}, has been replaced by the two variables: family mean of cumulative neighbourhood exposure, F_{jk}, and individual departure from the family mean, I_{ijk}.
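A minimal sketch of this decomposition and of how a model of the form of Eq 2 could be fitted, assuming hypothetical column names (log_income, cum_exposure, family_id, childhood_neigh, and abbreviated controls) in a pandas DataFrame; the nested family level enters as a variance component within childhood neighbourhoods. This is an illustration, not the authors' estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per individual; file name and columns are hypothetical.
df = pd.read_csv("siblings.csv")

# F_jk: family mean of cumulative exposure, shared by the two siblings.
df["family_mean"] = df.groupby("family_id")["cum_exposure"].transform("mean")
# I_ijk: individual departure from the family mean.
df["departure"] = df["cum_exposure"] - df["family_mean"]

# Eq 2 as a three-level mixed model: a random intercept for the childhood
# neighbourhood (v_k) plus a family variance component nested within it
# (mu_jk). Controls are abbreviated relative to the full specification.
model = smf.mixedlm(
    "log_income ~ age + sex + father_bg + educ_years + employed + tenure"
    " + family_mean + departure",
    data=df,
    groups="childhood_neigh",
    re_formula="1",
    vc_formula={"family": "0 + C(family_id)"},
)
print(model.fit().summary())
```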
As part of our modelling design we create a control group consisting of a set of what we term 'contextual sibling pairs'. These are synthetic pairs who originate from the same neighbourhood and should hence share any advantages (or disadvantages) arising from the place in which they grew up. However, unlike real siblings, they do not share parents, so their family-based upbringing, genes and other family factors differ. The contextual sibling pairs are created by selecting all individuals in the same age range as the 'real' siblings (15-21 in 1990) and pairing them randomly within strata defined by neighbourhood of origin, father's country background (Sweden, West, Eastern Europe incl. Russia or Non-western countries), and father's income level, as sketched below. We then subject the contextual sibling pairs to the same restrictions as our real siblings: 1) they should be born no more than three years apart; 2) at least one should leave the parental home between 1991 and 1993; 3) they should leave home a maximum of four years apart. All pairs not fulfilling these criteria are deleted. We also delete any real sibling pairs, related through either the father or the mother. The randomly paired individuals are fewer in number than the real siblings: 8,300 individuals in 4,150 pairs. If our modelling approach functions properly, the 'family'-level variance for the contextual siblings should be close to zero (since they are not related, variation here would be erroneous). In addition, we expect the estimates of the 'family' mean and individual departure from the 'family' mean variables to behave differently. The 'family' mean of the contextual siblings should not capture anything other than a simple mean of the neighbourhood paths of two unrelated individuals. We hypothesize that unrelated individuals (the contextual siblings) will experience more variation in terms of their adult neighbourhood paths (since there is no shared family influence that might affect their future neighbourhood choices). As a result, the 'family' mean variable will matter less for the contextual siblings than for the actual sibling pairs. The individual departure from the 'family' mean variable will, however, likely be more important for the set of contextual siblings, since this variable captures the individual pathway and its influence on work income.
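A minimal sketch of this pairing procedure, assuming hypothetical column names (neigh_origin, father_bg, father_inc, birth_year, leave_year); the published construction may differ in details such as how strata are ordered and how real siblings are screened out.

```python
import numpy as np
import pandas as pd

def make_contextual_pairs(pool: pd.DataFrame, seed: int = 1) -> list:
    """Randomly pair unrelated individuals within strata defined by
    neighbourhood of origin, father's country background and father's
    income level, keeping only pairs that meet the sibling criteria.
    Column names are hypothetical; dropping real siblings is omitted."""
    rng = np.random.default_rng(seed)
    pairs = []
    for _, grp in pool.groupby(["neigh_origin", "father_bg", "father_inc"]):
        idx = rng.permutation(grp.index.to_numpy())
        for i in range(0, len(idx) - 1, 2):
            a, b = pool.loc[idx[i]], pool.loc[idx[i + 1]]
            if (abs(a["birth_year"] - b["birth_year"]) <= 3        # rule 1
                    and abs(a["leave_year"] - b["leave_year"]) <= 4  # rule 3
                    and any(1991 <= y <= 1993                       # rule 2
                            for y in (a["leave_year"], b["leave_year"]))):
                pairs.append((idx[i], idx[i + 1]))
    return pairs
```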
As individual controls, we include variables that are time-invariant and/or unaffected by childhood family or childhood neighbourhood, including age, sex (in the models using mixed pairs) and father's country of birth (Sweden, Western countries, Eastern Europe incl. Russia, Non-western countries). We have chosen to define country of birth through the father since children of immigrants often face similar difficulties as their first-generation immigrant parents, and because many children bear their father's family name, which is a strong marker of ethnicity. We also include a range of control variables that are likely to be endogenous, as they are causally related to our dependent variable, income from work, but are also probably affected by family and/or childhood neighbourhood factors. These are partner status (single or living with a partner), whether there are children in the household (no or yes), level of education measured in years (categorised as under 12 years, 12 years, 13-14 years, or 15+ years), employment status (in paid employment or not), and housing tenure (home ownership, tenant-owned cooperative, rental). Not including these control variables in the analysis would risk overestimating the effect of the adult neighbourhood path on income from work. However, if we include these variables, we risk underestimating the true neighbourhood effect. Unfortunately, there is no easy way around this problem, so we address it by running our models both including and excluding the endogenous time-variant control variables. Thus, whilst we cannot obtain an exact measure of any 'true' neighbourhood effect arising from adulthood neighbourhood exposure, we can estimate an interval within which any effect is likely to be found. We find this to be a useful solution and one that avoids providing certainty around a statistical estimate which is anything but. Descriptive statistics of all variables included in the analysis are shown in Table 1. Table 2 shows the results from our 'individual models': the models estimating the effect of adult neighbourhood exposure at the level of individuals on income from work, separately for male and female same-sex siblings. Models 2a and 2c include only characteristics that are not influenced by parents/childhood neighbourhood (age and country of birth). In Models 2b and 2d we add time-varying variables that are known to affect income from work (family composition, education level, employment status, tenure), but which are also highly likely to be influenced by childhood family context and childhood neighbourhood exposure. The share of the variation that can be attributed to the three levels (individual, family and neighbourhood) is instructive. Models 2a and 2c show that for both males and females, only a very small part of the variation in later-in-life income is related to the childhood neighbourhood. Given that we measured income as the average income from work over years 12-14 after having left the parental home and neighbourhood, it is not surprising that the effect of the childhood neighbourhood is low. However, the effect is still present. Variation at the family level is considerably higher and corresponds to between 12% (females) and 14% (males) of the total variation in income. Hence, in line with previous research, we find the family context to be much more important than the childhood neighbourhood context in explaining variation in adult income.
Table 2. Results for real siblings, from individual models, using own cumulative exposure to poverty neighbourhoods. Dependent variable = logged income from work.
(The body of Table 2 is not reproduced here; the predictor shown is total sum % low-income neighbours over 11 years.)
When adding time-varying control variables (Models 2b and 2d), there is no variation in adult income left to explain at the level of the childhood neighbourhood, and family-level variation is substantially attenuated (to about 5% of the total variation). This change in variation confirms that the added control variables are correlated with both childhood neighbourhood context and childhood family context. Given the timing of these variables (childhood exposure/experiences must come before any adulthood characteristics), we argue that the results are likely to show a causal pattern where the childhood neighbourhood and childhood family influence later-in-life choices related to family composition and socio-economic status.
Looking at the coefficients of models 2a-d, we find that adulthood cumulative exposure to low-income neighbours has a negative effect on income. The models without potential overcontrolling (models 2a and 2c) yield coefficient estimates of about -.0021. The average cumulative exposure for males is 350 (see Table 1), which corresponds to a predicted effect of -.735 on logged income, whereas the maximum achieved cumulative exposure is 969 (see Table 1), which corresponds to a predicted effect of -2.035. When time-varying control variables are added in models 2b and 2d, the coefficients for adulthood cumulative exposure to low-income neighbours are almost cut in half, for males and females alike.
The control variables work as expected: age is positively associated with income from work, while having a father from a country outside of Sweden, especially a non-Western country, is associated with an income penalty. The effects of both age and ethnicity are stronger for females than for males. When adding the time-varying control variables in models 2b and 2d, the effect of ethnicity is substantially reduced while the coefficient for age is relatively stable. Looking at the coefficients of the time-varying control variables of models 2b and 2d we find, not surprisingly, that employment is the most important variable for explaining income from work (note that income from work also includes work-related transfers and that it is estimated at a later point in time). Employment status is, of course, highly correlated between consecutive years, and work experience also tends to pay off in terms of obtaining a higher income. We find that having children is positively correlated with income for males but negatively for females. A higher education level has a positive effect on income, especially for females, and people living in rented dwellings tend to have lower incomes compared with those in owner-occupied dwellings.
In the family models (models 3a-3d, with each model corresponding to 2a-2d) presented in Table 3, individual cumulative neighbourhood exposure is replaced by the two variables 'family mean in neighbourhood exposure' (measured on the family level) and 'individual departure from the family mean' (measured on the individual level). In model 3a, for same-sex male sibling pairs, we obtain estimates of family mean and individual departure of -.0024 and -.0016 respectively. The family mean coefficient is somewhat larger than the individual effect (estimated in model 2a), suggesting that the family mean variable captures more than the individual-level variable. In other words, the results suggest that the neighbourhood path of the sibling has an effect on an individual's income from work. This effect is likely to be indirect, however, operating through the siblings' joint background which includes both their shared family history and shared childhood neighbourhood.
The individual departure from the family mean variable estimates the effect of the individual adult neighbourhood pathway which is unrelated to the sibship. The negative coefficient of this variable means that an individual who performs 'better' than their sibling (i.e. has a lower cumulative exposure to low-income neighbours) will have a higher income from work, whereas an individual who performs 'worse' than their sibling will earn less. To exemplify how results are affected by taking the sibship into account, we calculate the effect of the family mean and the individual departure combined and compare these to the results for individual exposure from the models in Table 2. We use male siblings and the results from the models without overcontrolling (models 2a and 3a). We have already shown that model 2a estimates the effect of individual cumulative neighbourhood exposure on income to be -0.735 for a male individual with a total mean cumulative exposure of 350. We repeat the exercise using the results from model 3a, for an individual with the same exposure (350) but who has a brother who has experienced either the minimum or maximum exposure to low-income neighbours (see Table 1 for values 50 and 969 respectively). In the brother-with-low-exposure scenario, the mean of the two brothers is 200 ((350+50)/2) and the individual departure for our individual is 150. These estimates give a total effect of about -0.7204 (very similar to the effect in model 2a). By contrast, the brother-with-high-exposure scenario has a total effect of -1.1053. Hence, our family model suggests that the poor performance of the brother, or rather the shared family and/or childhood neighbourhood characteristics that affects both the brother's performance as well as other aspects of life, has a negative effect on income from work.
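For readers who want to retrace these scenario calculations, a small sketch follows; it uses the rounded coefficients from model 3a (-.0024 for family mean, -.0016 for departure), so the totals differ slightly from the published figures, which are based on unrounded estimates.

```python
# Rounded model 3a coefficients from the text; the published totals
# (-0.7204 and -1.1053) use unrounded estimates, so these approximate them.
beta_family_mean, beta_departure = -0.0024, -0.0016

def total_effect(own_exposure: float, sibling_exposure: float) -> float:
    """Combined family-mean and individual-departure effect on log income."""
    family_mean = (own_exposure + sibling_exposure) / 2
    departure = own_exposure - family_mean
    return beta_family_mean * family_mean + beta_departure * departure

print(total_effect(350, 50))   # brother with low exposure:  about -0.72
print(total_effect(350, 969))  # brother with high exposure: about -1.09
```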
Apart from the changes in the coefficients related to neighbourhood exposure, models 3a-3d perform similarly to their individual-model equivalents (models 2a-2d). The explained neighbourhood-level and family-level variation are almost identical, reinforcing the conclusion that only a very small proportion of the variation can be attributed to the childhood neighbourhood. By contrast, the effect of the family context is considerably more important. Adding time-varying control variables reduces family-level variation substantially, whereas childhood neighbourhood-level variation disappears. Again, we suggest that this is due to causal effects where the variation at these levels is absorbed by the control variables. We have explored repeating the analysis for a group of siblings identified as twins. These results (not shown) generally reinforce our overall conclusions. Using twins, however, who arguably are more similar than regular siblings, the family context increases in importance in explaining variation in income from work. In the twin model, it explains 34% and 25% of the variation in income for males and females respectively, using the family models without over-controlling, whereas any neighbourhood variation is completely lacking. Also, the size of the coefficients for family mean increases, suggesting that the sibling (or family) is more important for individual outcomes for twins than for regular siblings, whereas the coefficient for individual departure decreases somewhat.
Table 3. Results for real siblings, from family model, using family mean and individual departure from family mean. Dependent variable = logged income from work. (The table body is not reproduced here; predictors include family mean sum % low-income neighbours.)
The results so far suggest that there is an independent effect of adult neighbourhood experiences on later-in-life income from work. The results also suggest that variation in income from work 12-14 years after having left the parental home can, to some extent, be explained by differences in family context. The models show only a small amount of variation due to differences in childhood neighbourhood context. However, the family mean variable, which includes the joint cumulative exposure of two siblings after having left the parental home, represents everything that is shared by two siblings, including both the family context and the childhood neighbourhood context. Hence, it is still unclear to what extent family and childhood neighbourhood affect income later in life. To sort this out, we rerun all our models using the control group of contextual siblings: the randomly paired individuals who are unrelated but originate from the same childhood neighbourhood (see Tables 4 and 5).
Two features of the contextual sibling models (4a-d for individual models, and 5a-d for family models) are specifically noteworthy. First, the results show that the contextual sibling design works as expected. The family variance in the models is zero, as it should be for unrelated individuals who do not share the same family context. Second, the family models with the contextual siblings (models 5a-5d) yield lower coefficient values for the family mean variable compared to the models with the real siblings (models 3a-3d). This probably means that the part of the coefficient that captures the shared family context is absent, which makes sense as these are not real but contextual siblings. The models for contextual siblings thus show that the models for the real siblings are able to capture family context effects. For the contextual siblings we repeat the exercise of calculating the joint effect of family mean and individual departure from the mean for a hypothetical individual with a mean exposure (of 350) and a 'sibling' with low (50) or high (969) exposure. We find that for the real siblings the overall neighbourhood effects are much stronger than for the contextual siblings; we suggest this can be explained by the family context effect that is additionally captured for the real siblings. Also, we find that for the contextual siblings the effect of having a high-poverty-exposure sibling is 26% higher than the effect of having a low-exposure sibling, while in the model for real siblings this difference is 53%. We also interpret this difference as a family context effect which is present for the real siblings, but not for the contextual siblings.
Interestingly, in the contextual sibling models the childhood neighbourhood variance is also zero. By design, the contextual siblings do share their childhood neighbourhood, so any (causal) effects from the childhood neighbourhood on later-in-life income from work should be captured at the neighbourhood level. That we do not find this suggests, contrary to several previous studies, that for this population the childhood neighbourhood has no long-lasting significant effect on income from work. Using real siblings (models 2a and c, and models 3a and c), we did find (a very small amount of) variance to be explained at the neighbourhood level. We suggest that the childhood neighbourhood variance found in these models was actually related to the family context, and to the fact that families sort into specific neighbourhoods. Had it really been a neighbourhood effect, we would also have found it in the contextual sibling models.
Summary
This paper set out to better understand the effects of childhood neighbourhood context and adulthood neighbourhood experiences on individual income from work later in life. The paper started from the idea that estimation of these neighbourhood effects is likely to be affected by the influence of the childhood family context. The childhood family sorts children into certain childhood neighbourhoods, affects adult neighbourhood careers, and also affects later-in-life income from work. Separating these different effects is a major challenge in neighbourhood effects research, because any childhood family effect might bias estimates of independent causal effects of childhood and adult neighbourhood experiences on income.
In this study we sought to overcome the family contextual bias by using a sibling design, supplemented with analyses for contextual sibling pairs as controls. These contextual siblings
do not share the childhood family, but do share childhood neighbourhood experiences. Comparing analyses for real siblings and contextual siblings can give greater insight into the different mechanisms at play. The overall results suggest that adult neighbourhood experiences do affect later-in-life income from work, but that there is no meaningful effect of the childhood neighbourhood context. However, the childhood family context is important in explaining later-in-life outcomes. These conclusions were derived from four sets of analyses (two for real siblings and two for contextual siblings). We first modelled the effect of individual-level neighbourhood experiences on income from work for real siblings (Table 2). The results suggest that longer-term exposure to high-poverty neighbours has a negative effect on income from work. However, this model cannot separate this effect from the effects of the childhood neighbourhood and the childhood family context, and they may therefore be confounded. Our individual-level model shows that there is little variance to explain at the level of the childhood neighbourhood, and that the childhood family context is much more important in understanding income. The family model for the real siblings shows a family mean effect on income later in life, which is likely to be a combination of childhood family and childhood neighbourhood context effects. The family model for the real siblings also shows an individual departure from the family mean effect; this can be interpreted as an effect of adult neighbourhood experiences on income from work. The model leaves no childhood neighbourhood variance to explain, but explains the family-level variance reasonably well. Next we ran models for the contextual siblings (see Tables 4 and 5). These models were designed to test whether the sibling design works, and to assess whether there is an effect of the childhood neighbourhood context on income from work. The models for the contextual siblings show that the sibling setup indeed works well, as these models explain nothing at the level of the family, which makes sense since contextual siblings are not real siblings by design. Interestingly, the contextual sibling models also have no variance to explain at the level of the childhood neighbourhood context. This suggests that there is no childhood neighbourhood effect on later-in-life income from work. The contextual sibling model does, however, show an effect of adulthood neighbourhood experiences on income.
Table 5. Results for contextual siblings, from family model, using family mean and individual departure from family mean. Dependent variable = logged income from work. (The bodies of Tables 4 and 5 are not reproduced here; predictors include total sum % low-income neighbours over 11 years and family mean sum % low-income neighbours.)
Conclusion
The results suggest that there is an adulthood neighbourhood effect on income from work, net of childhood neighbourhood and childhood family context effects. The results also suggest that any effects of the childhood neighbourhood context on later-in-life income are in fact childhood family context effects. That is not to say that the childhood neighbourhood is unimportant, but rather that the apparent childhood neighbourhood effect is likely the result of non-random selection of families into neighbourhoods based on family characteristics. Our analyses show that individuals with a sibling who does well in terms of their (adult) neighbourhood pathway (in other words, has a low cumulative exposure to low-income neighbourhoods) have a higher predicted income from work compared to individuals with a sibling with a high exposure to low-income neighbourhoods. We interpret this as a family context effect. Those with siblings in low-income neighbourhoods are assumed to come from a less resourceful or advantageous family (in terms of finances, time investments or other unobservable but important traits such as genetics), whereas individuals whose siblings live in better neighbourhoods are assumed to benefit from a more positive family background. Our overall conclusion, therefore, is that the childhood family context has a lasting effect on adult income, even when taking both childhood and adult neighbourhood paths into account. Part of what appeared to be a neighbourhood effect was in fact a lasting 'family effect'. For the wider research literature, it is clear that, when possible, models of neighbourhood effects should control for the childhood family context to avoid bias in estimates.
Discussion
A possible limitation of our study is the construction of the contextual sibling pairs. Because of pragmatic and conceptual restrictions we have used a relatively simple way to construct the control group of contextual siblings. Although we had access to full population data, imposing more restrictions on the contextual siblings would have reduced the size of the control group further. A larger control group could be constructed in countries with larger populations, or by using multiple cohorts within the data. A further limitation is that the real sibling pairs differ in ways we cannot observe in the data. To reduce these possible differences, a dataset of real (preferably identical) twins could be used, but that requires a dataset with a large number of twins, requiring at least a birth cohort study or preferably a twin study. In those cases we would likely be able to acquire genetic information as well, allowing further control of currently unobservable factors. However, using our design, we got the most out of the register data at our disposal.
This study contributes to current debates in the neighbourhood effects literature on differential impacts of similar neighbourhood environments on different people (see [23] and [52]). We add to the discussion of individual heterogeneity by arguing that the overall effect may differ among individuals depending on the characteristics of their parental family background and former neighbourhood experiences. Although the family background is not deterministic in any sense (for instance, individuals may perform well despite coming from a less advantageous family background, or do relatively badly in terms of neighbourhood path despite having a resourceful family), the childhood family context generally has a lasting effect on individual income later in life. These results were acquired using data from Sweden, a country that provides relatively good opportunities for individuals to 'move up' the social ladder in terms of both income and neighbourhood path. Although there is indeed a link between family background and individual performance (see [44] on socio-economic status; [4] on neighbourhood status), it should be easier to achieve upward social mobility in terms of neighbourhood status in countries characterized by relatively high levels of income equality, such as Sweden, than in more liberal welfare regimes. Hence, it is likely that the 'family effects' found in this paper would be stronger in other types of societies. | 2019-06-01T13:10:48.393Z | 2019-05-30T00:00:00.000 | {
"year": 2019,
"sha1": "b49b52cbda6185a926c38cebc09e25b258177eb1",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0217635&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b49b52cbda6185a926c38cebc09e25b258177eb1",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
196666919 | pes2o/s2orc | v3-fos-license | Heterosis studies for yield and agronomic traits in Thai upland rice
The exploitation of heterosis and heterobeltiosis is a promising way to raise yield potential in crops. Twenty-eight F1 hybrids and their eight parents were evaluated to estimate the heterosis and heterobeltiosis of yield and other agronomic traits in Thai upland rice. Significant differences in the analysis of variance were observed for all studied traits, indicating considerable genetic variability among the hybrids and their parents. The highest significant positive heterosis and heterobeltiosis was attained by Dawk Pa-yawm × Hawm Mali Doi for number of tillers (90.59%; 58.82%), number of panicles plant-1 (60.35%; 46.14%), and panicle length (heterobeltiosis: 20.05%), together with the highest significant negative heterosis for plant height (-8.90%). Likewise, Nual Hawm × Khun Nan showed the highest significant positive heterosis and heterobeltiosis for yield components, viz., number of filled grains panicle-1 (57.39%; 52.25%), spikelet fertility (25.01%; 21.16%), 1000-grain weight (heterosis: 12.85%) and grain yield plant-1 (heterosis: 19.86%), and the highest significant negative values for days to flowering (-17.52%; -6.03%) and days to maturity (-12.00%; -4.91%). These crosses are recommended as the most promising combinations for obtaining early favorable segregants and developing high-yielding upland rice hybrid varieties by heterosis breeding.
INTRODUCTION
Rice (Oryza sativa L.) is a paramount staple food crop, but its production tends to decrease due to the shrinking of potential wetland. This might be addressed by cultivating upland rice in dryland areas. However, upland rice productivity remains sluggish at around 1 t ha-1. Therefore, hybrid varieties are a current strategy for improving upland rice production by utilizing heterosis, or hybrid vigor (Sari et al., 2019). Rice is naturally a self-pollinated crop, but strong heterosis is observed. Heterosis and heterobeltiosis are the phenomena in which an F1 hybrid has superior performance over its mid-parent and better parent, respectively (Virmani et al., 1982).
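These definitions translate into the standard formulas, H = 100 × (F1 - MP)/MP with MP the mid-parent mean, and Hb = 100 × (F1 - BP)/BP with BP the better parent. A minimal sketch, using hypothetical trait values rather than data from this study:

```python
def heterosis(f1: float, p1: float, p2: float) -> float:
    """Mid-parent heterosis (%): F1 performance relative to the parental mean."""
    mid_parent = (p1 + p2) / 2
    return 100 * (f1 - mid_parent) / mid_parent

def heterobeltiosis(f1: float, p1: float, p2: float,
                    higher_is_better: bool = True) -> float:
    """Better-parent heterosis (%). For traits where low values are desired
    (e.g. plant height, days to flowering), the 'better' parent is the one
    with the lower value."""
    better = max(p1, p2) if higher_is_better else min(p1, p2)
    return 100 * (f1 - better) / better

# Hypothetical grain yields (g plant-1), for illustration only.
print(heterosis(24.0, 18.0, 22.0))        # 20.0 -> positive mid-parent heterosis
print(heterobeltiosis(24.0, 18.0, 22.0))  # ~9.1 -> positive heterobeltiosis
```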
Both negative and positive heterosis are helpful for crop improvement, depending on the breeding targets. Generally, negative heterosis is desirable for early maturity and positive heterosis for high yield (Nuruzzaman et al., 2002). Heterosis breeding is a principal genetic tool that can facilitate yield enhancement by 30-40% and contributes to raising desirable qualitative and quantitative traits in cultivated plants (Srivastava, 2000). Varying degrees of heterosis and heterobeltiosis in cultivars and elite lines of rice were observed by Nevame et al. (2012).

MATERIALS AND METHODS

Ten important agronomic and yield traits were recorded, viz., plant height (cm), days to flowering, days to maturity, number of tillers plant-1, number of panicles plant-1, panicle length (cm), number of filled grains panicle-1, spikelet fertility (%), 1000-grain weight (g), and grain yield plant-1 (g). Observations were recorded on ten randomly selected plants of both hybrids and parents for these studied traits. For the accuracy of the present study, the true hybridity of the F1 upland rice plants was identified and verified by simple sequence repeat (SSR) markers, using twenty sets of rice SSR primers (Table 2), at the Plant Molecular Biotechnology Laboratory, Faculty of Natural Resources, Prince of Songkla University, Hat Yai, Thailand.
RESULTS AND DISCUSSION
The authenticity of F1 upland rice hybrids: Twenty sets of rice SSR primers were surveyed on the eight parents to identify the segregation pattern among them. Of these, six sets of rice SSR primers, viz., RM 1, RM 5, RM 44, RM 144, RM 215 and RM 510, produced single-band markers that clearly distinguished the parents. The specific SSR primers used to verify the hybrid authenticity of the F1 upland rice plants for each cross combination in the present study are presented in Table 3.
Analysis of variance:
The genotypic difference among Thai upland rice genotypes was confirmed by analysis of variance of the recorded data on the indicated traits (Table 4; abbreviations: NP = number of panicles plant-1; PL = panicle length; NFG = number of filled grains panicle-1; SF = spikelet fertility; 1000-GW = 1000 grain weight; GYP = grain yield plant-1; CV = coefficient of variation; ** = significant at 1%; * = significant at 5%; ns = non-significant). The results showed highly significant differences among genotypes, among parents, and among hybrids for all studied traits. The differences among parents indicated that each of them had distinct characters and that they were appropriate for genetic and hybrid studies. The genotypic differences among hybrids indicated that they were eligible for further analysis, i.e., the estimation of heterosis and heterobeltiosis, because different hybrids will show different characters. The significant differences of parents vs. hybrids for all studied traits (except for plant height, number of filled grains panicle-1 and 1000-grain weight) indicated that the pairs of parents and hybrids express different characters with significant heterosis and heterobeltiosis. The coefficient of variation (CV) was less than 17% for each trait, indicating the accuracy of the data obtained.
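The paper does not show the CV computation, but in ANOVA tables it is conventionally the square root of the error mean square divided by the grand mean; a minimal sketch with made-up values:

```python
import math

def anova_cv(error_mean_square: float, grand_mean: float) -> float:
    """Coefficient of variation (%) as conventionally reported with ANOVA."""
    return 100 * math.sqrt(error_mean_square) / grand_mean

# Hypothetical values: error MS of 4.1 and grand mean of 20.3 g plant-1.
print(round(anova_cv(4.1, 20.3), 1))  # about 10.0, i.e. below the 17% bound
```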
Mean performance: Mean values of the parents and their hybrids are given in Table 5. The high variation in the data confirmed considerable genetic variability in both the parent and hybrid groups, indicated that different genetic systems were involved in controlling the traits, and also emphasized the importance of studying these traits.
Heterosis (H) and heterobeltiosis (Hb):
The degree of H and Hb in this study varied among crosses and traits (Table 6), in accordance with Alam et al. (2004) and Singh et al. (2011), who observed varying degrees of heterosis and heterobeltiosis for yield and its attributes in upland rice hybrids.
A negative direction of H and Hb is desired for plant height. DP × HMD showed the highest significant negative heterosis (-8.90%), followed by DP × KN (-8.62%). Meanwhile, significant negative heterobeltiosis was observed only in the hybrid KN × GML (-7.15%). Thus, these hybrids can be used to generate semi-dwarf varieties in subsequent breeding programs. Negative heterosis for rice plant height in several crosses was reported by Nuruzzaman et al. (2002) and Alam et al. (2004). The partial harvesting of lodged plants and the increase of diseases and pests can reduce the quantity and quality of grains. Hence, breeders prefer plants with stiff culms and short height; moreover, semi-dwarf plants are high yielders due to increased tillering ability, resistance to lodging and better responsiveness to nitrogen fertilizer (Saleem et al., 2008). Rahimi et al. (2010) also reported a significant negative correlation between plant height and rice grain yield. Thus, obtaining semi-dwarf plants is one of the important aims of a rice breeding program.
Development of high-yielding, early-maturing varieties is a main target of rice breeding programs. Regarding days to flowering and days to maturity, significant negative heterosis was observed in seven and nine crosses, respectively. Among the 28 crosses, the highest significant negative heterosis (-17.52%; -12.00%) and heterobeltiosis (-6.03%; -4.91%) for both of these traits was observed in NH × KN, indicating an over-dominance type of gene action, while DP × HMD exhibited significant negative values (-11.63%; -7.04%) only over its mid-parent, indicating a partial-dominance type of gene action. Thus, these cross combinations offer a chance for developing early-maturing varieties. Negative heterosis for earliness of flowering and maturity in hybrid rice was also observed by Aananthi and Jebaraj (2006). (Footnotes to Tables 5 and 6: abbreviations as in Table 4; bold indicates the highest significant value for each trait in the desirable direction.)
The magnitude of hybrid vigor was highest for number of tillers plant-1. The heterosis and heterobeltiosis values ranged from -18.24 to 90.59% and -49.30 to 58.82%, respectively (Table 6). The highest significant positive heterosis (90.59%) and heterobeltiosis (58.82%) were found in DP × HMD, due to an over-dominant type of gene action. However, Virmani et al. (1982) reported that grain yield does not fully benefit from highly positive heterosis for tiller number, because an increased number of spikelets per unit area indicates that some tillers are not productive. This occurred in the present research: the hybrid DP × HMD had the highest heterosis and heterobeltiosis values for number of tillers plant-1 but not the highest values for other yield attributes, such as number of filled grains panicle-1 and spikelet fertility. The present findings are similar to earlier reports of Bagheri and Jelodar (2010), who found a hybrid with the maximum significant positive heterosis and heterobeltiosis for number of tillers plant-1 that did not have the highest values for other yield traits, such as panicle length and number of spikelets panicle-1.
With regard to number of panicles plant-1, positive heterosis is desirable. DP × HMD had the highest significant positive heterosis (60.35%) and heterobeltiosis (46.14%). Generally, an increase in the number of productive tillers plant-1 is followed by an increase in the number of panicles plant-1 (with the number of tillers greater than the number of panicles), but this does not ensure the highest grain yield, which depends on other yield-related traits, such as panicle length, number of filled grains panicle-1, spikelet fertility, etc. Similar observations of heterosis and heterobeltiosis for number of tillers and number of panicles plant-1 were made by Rashid et al. (2007) and Rahimi et al. (2010): most of the investigated crosses had significant positive values, with the value for number of tillers greater than that for number of panicles plant-1.
In rice, a long panicle with more filled grains provides an opportunity for higher yields, so positive heterosis is desirable for panicle length. Out of 28 crosses, significant positive values were observed in 13 crosses over the mid-parent and six crosses over the better parent, with the maximum values attained by NH × KN (32.21%) for heterosis and DP × HMD (20.05%) for heterobeltiosis, indicating a partial-dominance type of gene action for both of these hybrids. Comparable results were reported by Nevame et al. (2012) and Patil et al. (2012): positive heterobeltiosis for panicle length indicates that the parental genes controlling its related traits interacted favorably and resulted in positive grain yield heterosis in most hybrids.
The hybrid NH × KN had the maximum significant positive heterosis and heterobeltiosis for number of filled grains panicle-1 (57.39% and 52.25%) and spikelet fertility (25.01% and 21.16%). Shanthi et al. (2006) and Sadimantara et al. (2014) reported that several crosses had significant positive values for these traits. However, these results are contrary to Joshi (2001), who reported the absence of significant positive standard heterosis and heterobeltiosis for spikelet fertility percentage in some crosses, supposedly because the pollen parents of those sterile hybrids might not have carried restorer genes.
The maximum significant positive heterosis for 1000-grain weight was observed in NH × KN (12.85%), but no significant positive heterobeltiosis was found in any cross, presumably because the parents vs. hybrids source of variation in the ANOVA was not significant for this trait. Moreover, the highest significant positive heterosis for grain yield plant-1 was attained by NH × KN (19.86%), while the highest heterobeltiosis was in NH × HMD (16.87%). Grain yield is the main goal of the breeding program as it is interrelated with other traits, so the hybrids superior for each yield-related trait discussed previously were identified as the most promising combinations for developing high-yielding upland rice varieties. Of the 28 crosses investigated in the present study, six expressed superiority for grain yield over the mid-parent and three crosses over the better parent (highly significant differences), indicating that non-additive gene action plays a role, as was also reported by Reddy et al. (2012). A high percentage of heterosis for grain yield and its components in upland rice was revealed by Alzona and Arrauadeau (1995) and Singh et al. (2011).
Two principal hypotheses have been proposed to explain the genetic basis of heterosis: the dominance hypothesis, under which heterosis is due to the accumulation of favorable dominant genes in a hybrid derived from two parents (Davenport, 1908), and the over-dominance hypothesis, under which the heterozygote (Aa) is more vigorous and productive than either homozygote (AA or aa) (East, 1936). Epistasis might also be a key genetic basis of heterosis in rice, as suggested by Li et al. (1997). Earlier studies have shown that heterosis results from partial to complete dominance, over-dominance, epistasis, or a combination of all of these (Comstock and Robinson, 1952). Bagheri and Jelodar (2010) inferred an over-dominant type of gene action when both types of heterosis are highly significant together with higher mean performance, a partial-dominance type when mid-parent heterosis is significant but heterobeltiosis is not, and an additive type when neither is significant. These results are in conformity with earlier findings by Shanthi et al. (2006) and Rashid et al. (2007) in diverse rice varieties of different origins.
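The inference rule attributed to Bagheri and Jelodar (2010) can be stated compactly; the sketch below encodes it as a simple decision function, where the significance flags are assumed to come from the tests on H and Hb:

```python
def gene_action(h_significant: bool, hb_significant: bool) -> str:
    """Rough gene-action classification following the rule described above."""
    if h_significant and hb_significant:
        return "over-dominance"
    if h_significant:
        return "partial dominance"
    return "additive"

print(gene_action(True, True))   # e.g. NH x KN for days to flowering
print(gene_action(True, False))  # e.g. DP x HMD for days to maturity
print(gene_action(False, False))
```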
Furthermore, according to Falconer and Mackay (1996), heterosis depends directly on the presence of dominance gene action, with its magnitude relying on the magnitude of directional dominance, and indirectly on interactions involving dominance effects at different loci, with its magnitude depending on the difference in gene frequency between the two parents at all loci affecting the trait. These gene-frequency differences derive from the diverse genetic backgrounds of the parental lines. Manjarrez et al. (1997) stated that a wide genetic distance between parents expands the gene differences and the potential interaction of genes in the form of dominance and epistasis, thus enlarging the potential for heterosis.
Relationships among heterosis and heterobeltiosis of the studied traits: The heterosis of grain yield plant-1 was significantly negatively correlated with days to flowering and maturity, whereas there were highly significant positive correlations of hybrid vigor (heterosis and heterobeltiosis) among most of the studied traits (Table 7). Therefore, number of filled grains panicle-1, spikelet fertility, and 1000-grain weight were considered major contributors to the grain yield of upland rice hybrids in this study, since they showed highly significant positive correlations, in both heterosis and heterobeltiosis, with single-plant yield. The results accord with the findings of Toshimenla et al. (2016), who concluded that exploitation of heterosis in upland rice is determined by grain yield plant-1, which is contributed by filled grains and grain weight.
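As an illustration of the kind of correlation behind Table 7 (the actual per-cross heterosis values are not reproduced here, so the numbers below are invented):

```python
import numpy as np

# Invented per-cross heterosis values (%) for two traits, for illustration.
h_grain_yield   = np.array([19.9,  5.2, -3.1, 12.4,  8.8])
h_filled_grains = np.array([57.4, 10.1, -5.0, 30.2, 22.5])

r = np.corrcoef(h_grain_yield, h_filled_grains)[0, 1]
print(round(r, 2))  # a strong positive correlation in this toy example
```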
CONCLUSION
In an ideal situation, upland rice hybrids with a semi-dwarf plant type, early flowering and maturity, high numbers of productive tillers and panicles, and high grain yield and its contributing traits are preferable. Keeping in view the mean performance, the heterosis and heterobeltiosis values, and their correlations, the two most promising crosses, viz., Dawk Pa-yawm × Hawm Mali Doi and Nual Hawm × Khun Nan, can be considered as the Thai upland rice F1 hybrids from which to gain early favorable segregants and develop high-yielding upland rice hybrid varieties. The results also indicate that some traits, such as number of filled grains panicle-1, spikelet fertility, and 1000-grain weight, were highly positively correlated with grain yield and are essential to the efficiency of upland rice breeding programs. | 2019-05-12T13:39:17.476Z | 2019-01-04T00:00:00.000 | {
"year": 2019,
"sha1": "92928743bf3c744c1df275873bf62e4adb97310a",
"oa_license": null,
"oa_url": "https://doi.org/10.18805/a-390",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c95fa0915772ffd6e44110df44d42293984148b0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
225870846 | pes2o/s2orc | v3-fos-license | DESCRIPTION OF KNOWLEDGE ABOUT CHANGES IN THE MENSTRUAL CYCLE IN INJECTING CONTRACEPTIVE ACCEPTORS IN PLOSO
Introduction: Injection contraception is one of the most common pregnancy prevention methods in Indonesia because it works effectively, is practical in use, and is relatively cheap and safe. An initial survey of 8 injectable contraceptive acceptors who experienced the menstrual cycle changes of amenorrhoea and spotting in Ploso Buden Village, Deket Subdistrict, Lamongan District, found that 75.00% were concerned about the changes in their menstrual cycle. The purpose of this study is to describe knowledge about changes in the menstrual cycle among injecting contraceptive acceptors in Ploso Buden Village, Deket District, Lamongan Regency. Methods: This study used a descriptive design, with a population of 97 people and a sample of 53 people. The sampling used was purposive sampling. Data were collected with a closed questionnaire. Data processing and analysis consisted of editing, scoring, coding and tabulating, presented in narrative form, followed by drawing conclusions. Results: The majority (54.7%) of injectable contraceptive acceptors had sufficient knowledge about changes in the menstrual cycle, amenorrhoea and spotting. Conclusion: Following from this research, the role of health workers in increasing knowledge about changes in the menstrual cycle is very important; they can provide counseling and distribute pamphlets and posters. ARTICLE INFO: Received 30 December 2019; Accepted 13 May 2020; Online 29 May 2020. *Correspondence: Lailatul Fadliyah
INTRODUCTION
Current contraceptive methods are promoted by the government to limit population growth. In order to achieve the goal of health development, namely improving the degree of public health, various efforts have been made in public health services, including family planning services (Abdul Bari Saifuddin, 2011).
There are still many people who have not received the right information about the benefits of family planning, so many myths have spread that need to be corrected, such as claims that contraception causes cancer, facial acne, or black spots on the face, or that it is not very effective in preventing pregnancy.
With the development of the family planning program launched by the government, contraception is evolving, and various contraceptive choices are offered to the public, from simple methods to permanent ones. Simple methods include pills, injections, implants, and intrauterine contraception (the IUD), while permanent contraception comprises vasectomy for men and tubectomy for women.
All contraceptive methods have side effects and varying effectiveness. Injection contraception is one of the most common pregnancy prevention methods among people in Indonesia because it works effectively, its use is practical, and it is relatively cheap and safe (Hanifa Winkjosastro, 2010).
Injection contraception works by thickening the cervical mucus so that it is difficult for sperm to penetrate. In addition, injection contraception also helps prevent the egg from implanting in the uterine wall so that pregnancy can be avoided (Benson, Ralph C, 2009). Side effects of injection contraception are often found in the community.
Quite a few contraceptive acceptors do not know the complaints or side effects of injection contraception, even though they have been using it for a long time. The side effects of injection contraception include changes in the menstrual cycle, including amenorrhoea and spotting, increase or decrease in body weight, nausea, dizziness, and vomiting (Abdul Bari Saifuddin, 2011).
Amenorrhoea and spotting occur mainly during the first few months of use, but this is not a serious problem and usually does not require treatment. If spotting continues, or if bleeding occurs after a period of no menstruation, it is necessary to look for the cause of the bleeding. It should be remembered that other causes of abnormal bleeding in users of these contraceptives are very rare compared with the bleeding outside the cycle and the blood spots, or spotting, associated with the method itself (Glacier Anna, 2012).
According to 2009 data from the Lamongan Health Office, active family planning participants totaled 228,821 people, of whom 124,810 (54.54%) used injection contraception, while the others used other contraceptives, including the IUD, MOP/MOW, implants, pills, and condoms. Data from the Deket Health Center recorded 7,351 active family planning participants, of whom 3,949 (53.72%) used injection contraception.
A preliminary survey conducted in February 2010 in Ploso Buden Village, Deket Subdistrict, Lamongan Regency, found that of 10 injectable contraceptive acceptors who experienced side effects from injecting contraceptive use, 5 people (50.00%) experienced amenorrhoea, 3 people (30.00%) experienced spotting, and 2 people (20.00%) experienced weight gain.
Of the injection contraceptive acceptors who experienced changes in the menstrual cycle, namely amenorrhoea and spotting, 6 people (75.00%) were concerned about the changes, while 2 people (25.00%) did not question them. From the above data it can be seen that most injectable contraceptive acceptors are concerned about the menstrual cycle changes of amenorrhoea and spotting. The factors that influence whether injection contraceptive acceptors are concerned about changes in the menstrual cycle can be identified as knowledge, education, family roles, health worker roles, work and age.
The first factor is the knowledge of injectable contraceptive acceptors. Knowledge is the result of knowing, and this happens after people have sensed a certain object. Sensing occurs through the human senses, namely the senses of sight, hearing, smell, taste and touch. Most human knowledge is obtained through the eyes and ears (Soekidjo Notoatmodjo, 2012).
Thus, the more people hear, see and feel, especially if they want to try things, the more knowledge they will gain; but someone who has never made an effort to feel, see or hear important information will certainly remain ignorant of many things, including the side effects of injection contraception. This condition makes acceptors worry about changes in their cycle.
Another factor that can influence knowledge is the education of injectable contraceptive acceptors. Education is defined as any planned effort to influence others, whether individuals, groups or communities, so that they do what is expected by the educators (Soekidjo Notoatmodjo, 2012). The higher the level of education, the more likely people are to obtain and absorb the information provided in a positive way. Conversely, the lower the level of education, the more difficult it may be for them to absorb information and ideas, including about changes in the menstrual cycle.
A role is a set of behaviors expected by others of someone according to their position in a system (Wahit Iqbal Mubarok, 2005). The family is the smallest unit of society, consisting of the head of the family and several people who gather and live in a place under one roof (Sudiharto, 2007).
When the family supports an injection contraception acceptor in facing changes in the menstrual cycle, the acceptor will likely feel more confident about the chosen contraception and may not feel worried about the changes at all.
On the other hand, injection contraception acceptors who do not have family support are likely to lose confidence in the use of injection contraception, to feel worried about the changes in their menstrual cycle, and even to drop out, ceasing to use contraception.
The role of health officers is to act as models of clean, healthy behavior and to guide people in solving health problems (Sudiharto, 2007). The greater the attention of health workers in providing health education, the less worried the injection contraception acceptor will be, because changes in the menstrual cycle are a natural thing. Conversely, if health workers do not provide education about changes in the menstrual cycle, acceptors will hesitate and may drop out. Health workers have a very important role in providing guidance relating to the side effects of injecting contraceptive use.
As for work, the work environment can give someone experience and knowledge, both directly and indirectly (Wahit Iqbal Mubarok, 2005). The experience and knowledge gained can be used as considerations in making decisions. By contrast, someone absorbed in daily activities or work will tend to ignore their health condition and so cannot recognize problems from the start. With increasing age, a person experiences changes in physical and psychological aspects (Wahit Iqbal Mubarok, 2007). If someone is too young, they can be said to have less experience, so their knowledge will be limited; as they get older, their knowledge increases along with their life experience.
One impact of a lack of knowledge about menstrual cycle changes in injecting contraceptive acceptors is that the acceptor feels uncomfortable because of the changes in her body, causing her to fear pregnancy or disease. If the knowledge of injecting contraceptive acceptors is adequate, they will not worry about continuing their family planning program, so that birth rates can continue to be held down.
The choice of contraceptive method requires good and correct consideration by the acceptor. Therefore, before making a choice, prospective acceptors should consult family planning doctors, midwives or other competent health workers.
To increase the knowledge of family planning acceptors, the role of health workers as educators is expected to help provide information or counseling about the problems of injecting contraceptive acceptors who experience changes in the menstrual cycle.
This knowledge can be provided by health workers through counseling of injecting contraceptive acceptors about changes in the menstrual cycle during community activities such as Posyandu and PKK meetings, or when acceptors come to health care providers, so that acceptors become more confident in using injectable contraceptives and, in particular, know how to deal effectively with the side effects of injection contraception.
The husband, as a life partner, also has an important role in providing emotional or psychological support to injecting contraceptive acceptors. Husband and wife, as acceptors, can work together to decide on the right and safe method; if there are impacts or side effects, they will be able to understand each other and make the right decision to overcome them.
Based on the above description, the authors were interested in conducting research on the knowledge of injectable contraceptive acceptors who experience the menstrual cycle changes of spotting and amenorrhoea.
MATERIALS AND METHODS
This research is descriptive, using a purposive sampling method. The population was all injecting contraceptive acceptors in Ploso Buden Village, Deket Subdistrict, Lamongan District in May 2010, totaling 97 people, while the sample was the subset of injecting contraceptive acceptors who met the inclusion criteria, totaling 53 people. The variable is knowledge about changes in the menstrual cycle in injecting contraceptive acceptors. Data were collected using questionnaire sheets and processed by editing, scoring, coding and tabulating, presented in narrative form, after which conclusions were drawn. Nearly half of the respondents (43.4%) had a high school education and a small proportion (7.7%) had a tertiary education. Nearly half of the respondents (43.4%) did not work and a small proportion (3.7%) worked as civil servants. More than half of the respondents (54.7%) had sufficient knowledge and a small proportion (12.1%) had good knowledge.
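The paper does not state the cut-offs used in the scoring step; the sketch below assumes the three-category scheme commonly used in Indonesian health research (>=76% good, 56-75% sufficient, <56% poor), purely for illustration:

```python
def knowledge_category(correct_answers: int, total_items: int) -> str:
    """Map a questionnaire score to a knowledge category. The cut-offs are
    assumed (a common convention), not taken from the paper itself."""
    pct = 100 * correct_answers / total_items
    if pct >= 76:
        return "good"
    if pct >= 56:
        return "sufficient"
    return "poor"

print(knowledge_category(12, 20))  # 60% -> 'sufficient'
```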
DISCUSSION
This chapter will present the results of research conducted to find out knowledge about changes in the menstrual cycle in injecting contraceptive acceptors in Ploso Buden Village, Deket District, Lamongan Regency. Based on table 4.3 shows that most respondents have sufficient knowledge about Changes in Menstrual Cycles. Good knowledge in knowing about changes in the menstrual cycle is caused by several things, namely age, education, and work of injecting contraceptive acceptors.
According to Wahit Iqbal Mubarok (2005) one of the factors that influence knowledge is age. With increasing age, there will be changes in physical and psychological aspects (mental). In this study all injectable contraceptive acceptors studied were aged 20 -35 years, which at that age was included in adulthood. With increasing age more information is obtained and more experience. But in reality many have enough knowledge. That can be caused because it is not balanced by the inadequate information obtained. Besides age, another factor that can influence injectable contraceptive acceptors with sufficient knowledge is education. Based on table 4.1 it shows that most of the injectable contraceptive acceptors have high school education and a small proportion have college education.
The higher the education of an injectable contraceptive acceptor, the more easily she will receive and absorb the information provided. Conversely, the lower the level of education, the more difficult it is to grasp information and ideas, including the fact that the menstrual cycle changes of amenorrhoea and spotting are a natural consequence of using injectable contraception.
Education is defined as any planned effort to influence others, whether individuals, groups or communities, so that they do what is expected by the education actors (Soekidjo Notoatmodjo, 2012).
According to Soekidjo Notoatmodjo (2012), knowledge is the result of knowing, and this occurs after a person has sensed a certain object. Sensing occurs through the human senses, namely sight, hearing, smell, taste, and touch.
Most human knowledge is obtained through the eyes and ears. The higher her knowledge, the better an injectable contraceptive acceptor can accept the side effects of injectable contraception. Conversely, low knowledge can cause acceptors to drop out, because they do not understand, or even know, that the menstrual cycle changes of amenorrhoea and spotting are physiological effects of injectable contraception.
The more they hear, see, feel, and are willing to try, the more knowledge they will gain; those who have never made an effort to feel, see, or hear important information will certainly remain ignorant of many things, including the side effects of injectable contraception. Another factor that can influence knowledge is occupation. Table 4.2 shows that most injectable contraceptive acceptors did not work. Acceptors absorbed in their daily activities or work may tend to neglect their health and thus fail to recognize a problem from the start, whereas acceptors who work outside the home can encounter new things and obtain information from coworkers about menstrual cycle changes as a side effect of injectable contraception. People who work outside the home can exchange experiences and knowledge with others; the experience and knowledge gained will be more varied, so that injectable contraceptive acceptors become confident that menstrual cycle changes are not a sign of illness.
This is in accordance with the opinion expressed by Wahit Iqbal Mubarok (2007) that the work environment can give a person experience and knowledge, both directly and indirectly.
Besides the above factors, the knowledge of injectable contraceptive acceptors about changes in the menstrual cycle can be influenced by interest, experience, culture, and information. However, owing to practical limitations, the researchers restricted this study to the factors of age, education, and occupation.
CONCLUSION
Based on the results, the discussion, and the objectives of this research on knowledge about changes in the menstrual cycle among injectable contraceptive acceptors in Ploso Buden Village, Deket District, Lamongan Regency, it can be concluded that most respondents had sufficient knowledge about the menstrual cycle changes of spotting and amenorrhoea. | 2020-07-02T10:14:17.017Z | 2020-05-29T00:00:00.000 | {
"year": 2020,
"sha1": "d9c446e4d688e72eba13556a76c45a25b62f9c6e",
"oa_license": "CCBYNCSA",
"oa_url": "https://e-journal.unair.ac.id/JoViN/article/download/19912/10901",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "38762e6bb85075c8bfaaafd37a6e184fb479bd67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
58008967 | pes2o/s2orc | v3-fos-license | Molecular detection and PCR-RFLP analysis using Pst1 and Alu1 of multidrug resistant Klebsiella pneumoniae causing urinary tract infection in women in the eastern part of Bangladesh
Klebsiella pneumoniae is the second leading causative agent of UTI. In this study, a rapid combined polymerase chain reaction and restriction fragment length polymorphism analysis was developed to identify K. pneumoniae in women infected with urinary tract infection in the Sylhet city of Bangladesh. Analysis of 11 isolates from women in the age range of 20–55 years from three different hospitals was done first by amplification with K. pneumoniae-specific ITS primers. All 11 collected isolates were amplified in PCR and showed the expected 136 bp products. Then, restriction fragment length polymorphism analysis of the 11 isolates was conducted after PCR amplification with 16s rRNA universal primers, followed by subsequent digestion and incubation with two restriction enzymes, Pst1 and Alu1. Seven of the 11 isolates were digested by the Pst1 restriction enzyme and six isolates by Alu1, while the others were negative for both enzymes. The results reveal that isolates from women aged between 25 and 50 were digested by both enzymes. An isolate from a woman aged over 50 was negative, while one from a woman below 20 was digested only by Pst1. These results could pave the way for further research on the detection of K. pneumoniae in women with UTI.
Introduction
Klebsiella pneumoniae is the second most common agent of urinary tract infection after Escherichia coli; however, its pathogenicity is higher than that of its counterpart [1]. Approximately 12% of UTIs are caused by K. pneumoniae, and this number is increasing at an alarming rate all over the world, particularly in Asia, due to the spread of antibiotic-resistant and extended beta-lactamase strains [2]. Women are eight times more vulnerable to UTI because of the anatomy of their reproductive organs, and many of the infections remain asymptomatic for prolonged periods [3]. The incidence rate increases with age, with recurrent infections (very common in women), and during pregnancy [4,5]. In Bangladesh, due to its geographical position, weather, food habits, early-age pregnancy, and lack of awareness about UTI, the number of patients infected by K. pneumoniae has grown over the last couple of years [6,7]. Several studies have been conducted on Escherichia coli-associated UTI, but a molecular approach for the detection and analysis of K. pneumoniae causing UTI in women has yet to be developed.
PCR, alone or in combination with RFLP, has been used extensively for the precise detection and analysis of pathogens for many years [8]. Traditional culture-based technologies are time consuming and labor intensive, and frequent use of antibiotics may affect culture-positive isolates, making the data difficult to interpret correctly [9]. PCR-based molecular approaches, by contrast, are independent of antibiotics and are more rapid, reliable, and sensitive; they are therefore routinely used as molecular tools for pathogen identification [10]. The 16s-23s internal transcribed spacer (ITS) unit of K. pneumoniae facilitates precise identification of this organism by polymerase chain reaction (PCR) [11]. Restriction endonuclease digestion of PCR products enables species determination and analysis of genome variability [12]. The sequence-specific RFLP pattern of bacteria amplified from 16s rDNA primers varies widely from species to species, and the conserved sequence can be differentiated by the PCR-RFLP method [13]. Restriction endonuclease digestion of bacterial DNA by Pst1, Alu1, and Mob1 has been used to confirm etiological agents in some earlier studies [14][15][16].
Multidrug-resistant K. pneumoniae poses an emerging health threat worldwide, especially in least developed and densely populated countries [17]. Current treatment practice commonly prescribes powerful antibiotics, which promotes the spread of multidrug-resistant bacteria and thereby reduces therapeutic efficacy [18]. In order to implement a successful treatment strategy for UTI, it is of great importance to know the current antibiotic resistance profile of the causative agents [17,18]. Early detection of K. pneumoniae in UTI could minimize the widespread use of antibiotics in prevention and control programs, as well as reduce medical costs. The objective of this study was to evaluate the 16s-23s ITS primer and the PCR-RFLP method as tools for the identification of multidrug-resistant (MDR) K. pneumoniae causing UTI in women.
Collection and culture of bacterial isolates
A total of 11 bacterial isolates were collected from three different hospitals of Sylhet city of Bangladesh: Sylhet MAG Osmani Medical College and Hospitals, Popular Hospitals and Diagnostic Centre, and Jalalabad Ragib Rabeya Medical College and Hospitals. Immediately after collection, the isolates were transported to the USDA project laboratory of the Department of Genetic Engineering and Biotechnology of Shahjalal University of Science and Technology while maintaining a cold chain. Isolates were cultured in ESBL medium and incubated overnight at 37°C. The isolates were then numbered from K1 to K11 for further studies. Patients' data (Table 1) were collected from doctors' consent forms and recorded for later analysis.
Genomic DNA extraction
All of the bacterial isolates were streaked on trypticase soy (TCS) agar medium for colony formation and incubated overnight at 37°C. A single colony was picked and grown overnight at 37°C in TCS broth in a shaker incubator for genomic DNA extraction. DNA of the 11 bacterial isolates was extracted following the instructions of a commercial genomic DNA extraction kit (Bio Basic Inc., 160 Torbay Road, Markham, Ontario, Canada). Additionally, proteinase K and RNase A were added after the incubation step to obtain purified DNA, according to the kit guidelines. The extracted DNA was quantified by gel electrophoresis with a lambda (λ) DNA marker, as well as in a spectrophotometer using the ratio of DNA to protein absorbance. DNA was then stored at −20°C for further use.
Identification of Klebsiella pneumoniae by PCR
For the identification of K. pneumoniae, the 16s-23s ITS primer pair was used to amplify the DNA sequence in this study [11]. The PCR master mixture was prepared in a 50 μl volume containing 25 μl of 2X master mixture (Fermentas, GeneRuler™, USA), 2.5 μl of each forward and reverse primer (Table 2), 5 μl of template DNA (100 ng), and 15 μl of nuclease-free water. PCR conditions consisted of an initial denaturation at 94°C for 4 min; a denaturation step of 94°C for 1 min, annealing for 1 min at 55°C, and an extension at 72°C for 1.5 min; a final extension step of 72°C for 10 min; and 4°C for final storage. A total of 35 serial cycles of the amplification reaction was performed in a MultiGene Gradient Thermal Cycler (Labnet International Inc., USA). PCR products were separated on a 1.5% agarose gel, followed by staining in ethidium bromide solution, and visualized in a gel documentation system.
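As a side illustration, the cycling program just described can be written down as data and sanity-checked in a few lines. This is a hedged sketch, not an instrument script: ramp times and the final 4°C hold are ignored, and the dictionary layout is my own.

```python
# The ITS-PCR program from the text, encoded as data; compute the minimum
# total block time (ramping and the 4 C hold are deliberately ignored).

ITS_PCR = {
    "initial_denaturation": (94, 4.0),           # (deg C, minutes), once
    "cycle": [("denaturation", 94, 1.0),
              ("annealing",    55, 1.0),
              ("extension",    72, 1.5)],
    "cycles": 35,
    "final_extension": (72, 10.0),               # once
}

def total_minutes(program):
    once = program["initial_denaturation"][1] + program["final_extension"][1]
    per_cycle = sum(minutes for _, _, minutes in program["cycle"])
    return once + program["cycles"] * per_cycle

print(f"Minimum run time: {total_minutes(ITS_PCR):.1f} min")  # 136.5 min
```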
Amplification of bacterial 16s rDNA by universal PCR
The PCR master mixture was adjusted to a 30 μl final volume containing 15 μl of 2X master mixture (Fermentas, GeneRuler™, USA), 1.5 μl of each of the universal 27F forward and 1540R reverse primers (Table 2), 2 μl of template DNA, and 10 μl of nuclease-free water. Here a total of 30 reaction cycles was programmed in the MultiGene Gradient Thermal Cycler (Labnet International Inc., USA), with an initial denaturation at 94°C for 4 min; a denaturation step of 95°C for 1.5 min, annealing for 1.5 min at 58°C, and an extension at 72°C for 1.5 min; a final extension step of 72°C for 5 min; and 4°C for final storage.
Restriction digestion
After the 16s rDNA PCR, 10 μl of the PCR product was transferred to a separate Eppendorf tube and 18 μl of nuclease-free water was added. Then, 2 μl of the Pst1 and Alu1 restriction enzymes (Table 2), premixed with BSA, was added carefully to the solution. The samples with added restriction enzyme were then spun gently for a few seconds and incubated at 37°C for 2 h in a water bath [12]. Fragments were then analyzed on a 2% agarose gel in TBE under UV illumination. A molecular weight marker (1 kb DNA ladder, Fermentas, GeneRuler™, USA) was included in each gel run.
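The digestion step has a simple in-silico counterpart: locating the recognition sites of the two enzymes in a sequence and predicting fragment lengths. The sketch below assumes the standard recognition sites (Pst1: CTGCAG, cutting after the fifth base; Alu1: AGCT, blunt cut after the second base) and uses a made-up toy sequence rather than the real 16s rDNA amplicon.

```python
# Predict restriction fragments of a linear sequence for Pst1 and Alu1.
ENZYMES = {"Pst1": ("CTGCAG", 5), "Alu1": ("AGCT", 2)}  # (site, cut offset)

def digest(seq, enzyme):
    site, offset = ENZYMES[enzyme]
    cuts, start = [], 0
    while True:
        i = seq.find(site, start)
        if i < 0:
            break
        cuts.append(i + offset)
        start = i + 1
    bounds = [0] + cuts + [len(seq)]
    return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

toy = "ATGCCTGCAGTTAGCTAAGGCCTGCAGATAGCTTTACG"  # invented sequence
for enz in ENZYMES:
    print(enz, digest(toy, enz))  # fragment lengths in bp
```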
Identification of K. pneumoniae isolates
All the bacterial isolates from the three hospitals were initially supplied as K. pneumoniae grown on selective agar media supplemented with ornithine, raffinose, and Koser citrate [21]. However, for further confirmation, the bacterial isolates were assayed for their morphological, physiological, and biochemical properties according to Bergey's Manual for the identification of K. pneumoniae [22]. After the biochemical tests, all of the isolates were confirmed to belong to K. pneumoniae. The isolates were also confirmed by amplification in PCR with the 16s-23s internal transcribed spacers (Fig. 1).
Antibiotic sensitivity of K. pneumoniae isolates
To determine the antibiogram profile of the isolates, we screened ten antibiotics representing different antibiotic groups. All of the isolates were resistant to ampicillin, erythromycin, chloramphenicol, cephradine, kanamycin, and sulphamethaxazole. Sensitivity (approximately 80%) was observed only for two antibiotics, ciprofloxacin and levofloxacin. Streptomycin and gentamycin were effective against only the K3 isolate (from a younger patient), and neither showed sensitivity against all of the tested isolates. Therefore, all the K. pneumoniae isolates were resistant to multiple antibiotics tested in this study (Fig. 4).
Discussion
K. pneumoniae causes a wide variety of diseases in both humans and animals. Among these, urinary tract infection is one of the most common and poses a serious health threat to women, especially pregnant and immunocompromised persons [23,24]. In recent years, the prevalence of UTI caused by K. pneumoniae has increased in Asia, including Bangladesh [6]. Data also suggest that pregnant women who experience recurrent UTI often suffer from other bacterial infections (Chlamydia and Mycoplasma) as well [5]. Despite these pronounced effects, no experimental data have been available on the molecular detection of UTI-causing K. pneumoniae and the restriction digestion analysis of its 16s rRNA in infected women in Bangladesh. Therefore, this study could serve as a platform for the rapid detection, and the determination of the virulence properties and drug sensitivity patterns, of K. pneumoniae-associated UTI in women.
Biochemical characterization and other media-based identification of K. pneumoniae often give false-positive results and require a considerable amount of time for confirmation [25]. The PCR method has therefore been widely used for the precise detection of pathogens and the analysis of their genetic diversity [26,27]. Previous PCR-based studies on the detection of K. pneumoniae by the 16s-23s internal transcribed spacer, although successful for most isolates, did not achieve complete accuracy [2,11,26]. The PCR in this study was outstanding, amplifying all 11 isolates, and was so sensitive that it produced reproducible results in repeated experiments.
Sequence-specific enzymatic cleavage of the amplified 16s rRNA allows precise and early diagnosis of disease [28]. In this study, RFLP digestion of the 16s rRNA produced a distinct cleavage pattern, with sizes ranging from 0.5 kb to 0.75 kb. The restriction enzyme Pst1 produced three types of banding pattern; seven isolates were fragmented and produced bands of the same sizes. The digestion pattern produced by the enzyme Alu1 was in close proximity to that of Pst1; six isolates were cleaved consistently at the same length. Sharma et al. conducted a study on the detection of E. coli and K. pneumoniae from a tertiary care hospital in India and performed restriction digestion analysis with EcoR1 and Pst1, in which 60% of the isolates were positive for Pst1 and gave bands at molecular weights of 150 bp to 750 bp [28]. In the present study, we found 64% digestion of the K. pneumoniae 16s rRNA, a slightly higher sensitivity to Pst1 digestion than in Sharma's 2010 study. This is probably due to similar circulating strains spreading over South-East Asia [28]. For Alu1, Kalghatgi et al. performed an experiment to differentiate K. pneumoniae from other pathogenic bacteria using several restriction enzymes, in which 60% of K. pneumoniae isolates were digested by Alu1 and showed bands at 476 bp, 220 bp, and 65 bp [29]. In this study, 63.4% of the isolates were sensitive to Alu1 and displayed a banding pattern (476 bp and 220 bp) similar to that earlier study [29].
Another significant finding of this study was the homogeneous banding pattern of the isolates from the pregnant woman (K7) and the recurrent-UTI patients (K4, K5, and K8), suggesting a common evolutionary origin for all of these isolates [28]. In addition, the patients' data reveal that these samples came from the same hospital (Popular Hospitals and Diagnostic Centre) and community (slum). Moreover, the isolates were resistant to all of the commercial antibiotics tested. Therefore, environmental factors and food habits might play some role in the recurrent infection and drug resistance pattern [27,31].
Antibiotic resistance has been increasing constantly and at an alarming rate in Bangladesh. In 2012, 60% of UTI-causing K. pneumoniae were resistant to commercial drugs [31]; by 2016, resistance to the common antibiotics used to treat UTI had increased by a further 20% [27]. The present study found that over 90% of the isolates were resistant to multiple tested antibiotics. The availability and frequent use of antibiotics are possibly responsible for this upward resistance trend in Bangladesh [7].
Finally, rather than treating K. pneumoniae-associated UTI with antibiotics, we have to put more emphasis on early detection methods. Common antibiotic-based treatment strategies add further complications for UTI patients [31]. Therefore, a rapid and precise molecular method is needed to address the problem, and PCR-RFLP could be a simple, selective, and cost-effective alternative to the traditional culture-based method in Bangladesh.
Acknowledgement
The present study was conducted under the research project titled "Virulent gene targeting and analysis of virulence factors of K. pneumoniae causing pneumonia and urinary tract infection (UTI) in Bangladesh", funded by the Ministry of Science and Technology, Bangladesh (MOST). | 2019-01-22T22:33:29.942Z | 2018-01-04T00:00:00.000 | {
"year": 2018,
"sha1": "93257c731a751dd43473d56372c72bb74db7e7eb",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jgeb.2017.12.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93257c731a751dd43473d56372c72bb74db7e7eb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
73674441 | pes2o/s2orc | v3-fos-license | A coding theoretic approach to the uniqueness conjecture for projective planes of prime order
An outstanding folklore conjecture asserts that, for any prime $p$, up to isomorphism the projective plane $PG(2,\mathbb{F}_p)$ over the field $\mathbb{F}_p := \mathbb{Z}/p\mathbb{Z}$ is the unique projective plane of order $p$. Let $\pi$ be any projective plane of order $p$. For any partial linear space ${\cal X}$, define the inclusion number $i({\cal X},\pi)$ to be the number of isomorphic copies of ${\cal X}$ in $\pi$. In this paper we prove that if ${\cal X}$ has at most $\log_2 p$ lines, then $i({\cal X},\pi)$ can be written as an explicit rational linear combination (depending only on ${\cal X}$ and $p$) of the coefficients of the complete weight enumerator (c.w.e.) of the $p$-ary code of $\pi$. Thus, the c.w.e. of this code carries an enormous amount of structural information about $\pi$. In consequence, it is shown that if $p > 2^9 = 512$, and $\pi$ has the same c.w.e. as $PG(2,\mathbb{F}_p)$, then $\pi$ must be isomorphic to $PG(2,\mathbb{F}_p)$. Thus, the uniqueness conjecture can be approached via a thorough study of the possible c.w.e. of the codes of putative projective planes of prime order.
1 Introduction

Note that the dual $\mathcal{X}^*$ of a partial linear space $\mathcal{X}$ is again a partial linear space iff each point of $\mathcal{X}$ is incident with at least two lines of $\mathcal{X}$. In this case, we have $\mathcal{X}^{**} = \mathcal{X}$.
Finally, a projective plane is an incidence system X such that (i) X is a linear space, (ii) its dual X * is also a linear space, and (iii) given any two distinct lines of X , there is at least one point of X which is non-incident with both lines. Note that, in the presence of (i) and (ii), the condition (iii) is equivalent to its dual condition (iii) * : given any two distinct points of X , there is at least one line of X which is non-incident with both points. Thus, the dual of a projective plane is a projective plane.
If $x_1, x_2$ are distinct points of a projective plane, we shall denote by $x_1 \vee x_2$ the unique line incident with both $x_1$ and $x_2$. Dually, if $\ell_1, \ell_2$ are distinct lines of a projective plane, then $\ell_1 \wedge \ell_2$ denotes the unique point incident with both $\ell_1$ and $\ell_2$. Note that, for any non-incident point-line pair $(x, \ell)$, $y \mapsto x \vee y$ defines a function from the set of all points on $\ell$ to the set of all lines through $x$. Clearly the function $m \mapsto m \wedge \ell$ is its inverse. So these two functions are bijections. These bijections are the so-called perspectivities in the projective plane. The existence of these perspectivities may be used to see that if $\pi$ is a finite projective plane, then there is a number $n \geq 2$ such that (a) each point of $\pi$ is incident with exactly $n + 1$ lines, (b) each line of $\pi$ is incident with exactly $n + 1$ points, (c) the total number of points of $\pi$ is $n^2 + n + 1$, and (d) the total number of lines of $\pi$ is $n^2 + n + 1$. This number $n$ is called the order of the finite projective plane $\pi$.
Examples (1) The field planes. Let $V$ be a three-dimensional vector space over a field $F$. For $i = 1, 2$, let $V_i$ be the set of all $i$-dimensional vector subspaces of $V$. We identify each element $\ell$ of $V_2$ with the set of all elements of $V_1$ contained in $\ell$. With this identification, $PG(2, F) := (V_1, V_2)$ is a projective plane, called the projective plane over $F$. With a little care in handling non-commutativity of multiplication, this construction generalizes to yield the projective plane $PG(2, D)$ over any division ring $D$. Recall that, by a famous theorem of Wedderburn, the finite fields $\mathbb{F}_q$ ($q$ a prime power) are the only finite division rings. Specializing the above construction, we get the classical finite projective planes $PG(2, \mathbb{F}_q)$, of order $q$.
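As a quick computational companion to this construction, the sketch below builds $PG(2, \mathbb{F}_p)$ for a small prime exactly as described (points are the 1-dimensional subspaces of $\mathbb{F}_p^3$, lines the 2-dimensional ones, represented by their normal vectors) and verifies the counts stated above. The representation choices are mine, not the paper's.

```python
# Build PG(2, F_p): one normalized vector per 1-dim subspace of F_p^3;
# a "line" is the set of points orthogonal (mod p) to a fixed normal vector.
from itertools import product

def pg2(p):
    reps = [v for v in product(range(p), repeat=3)
            if any(v) and next(x for x in v if x) == 1]
    dot = lambda u, w: sum(a * b for a, b in zip(u, w)) % p
    points = reps
    lines = [frozenset(x for x in points if dot(x, l) == 0) for l in reps]
    return points, lines

p = 5
points, lines = pg2(p)
assert len(points) == len(lines) == p * p + p + 1   # n^2 + n + 1 with n = p
assert all(len(l) == p + 1 for l in lines)          # n + 1 points per line
print(f"PG(2, F_{p}): {len(points)} points, {len(lines)} lines")
```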
(2) The free projective plane. The usual definition of the free projective plane may be found in [9]. We like to rephrase this definition slightly, as follows. We first define a sequence $X_n = (P_n, L_n)$, $n \geq 1$, of partial linear spaces by induction on $n$. $X_1$ is the incidence system whose points and lines are the vertices and edges of the 4-cycle. Having defined $X_n$, we extend it to $X_{n+1}$ by introducing a new point $\ell_1 \wedge \ell_2$ corresponding to each pair $\{\ell_1, \ell_2\}$ of lines of $X_n$ such that no point of $X_n$ is incident in $X_n$ with both $\ell_1$ and $\ell_2$, and introducing a new line $x_1 \vee x_2$ corresponding to each pair $\{x_1, x_2\}$ of points of $X_n$ such that no line of $X_n$ is incident in $X_n$ with both $x_1$ and $x_2$. The point $\ell_1 \wedge \ell_2$ is incident in $X_{n+1}$ with the lines $\ell_1, \ell_2$ and with no other line. The line $x_1 \vee x_2$ is incident in $X_{n+1}$ with the points $x_1, x_2$ and with no other point. The incidences between the old points and old lines are as in $X_n$. Thus, by construction, each $X_n$ is a subsystem of $X_{n+1}$; put $\mathcal{F} := \bigcup_{n \geq 1} X_n$. Clearly $\mathcal{F}$ is an (infinite) projective plane. It is called the free projective plane. An easy induction shows that each $X_n$ is self-dual (i.e., isomorphic to its dual). It follows that (like the field planes) the free projective plane is self-dual.
A projective plane $\sigma$ is said to be a subplane of the projective plane $\pi$ if $\sigma$ is a subsystem of $\pi$. A projective plane $\pi$ is said to be prime if it has no proper projective subplane. The projective planes over prime fields $F$ (i.e., $F = \mathbb{Q}$ or $F = \mathbb{F}_p$ for some prime $p$) are obvious examples of prime projective planes. It is not hard to see that the free projective plane $\mathcal{F}$ is also prime. (Indeed, using the above construction, one sees that the automorphism group of $\mathcal{F}$ is transitive on the 4-cycles, the isomorphic copies of $X_1$, in $\mathcal{F}$. The same is true of $PG(2, F)$, $F$ a prime field.) In a private conversation with the author some years back, N.M. Singhi forwarded: One of the reasons why the free projective plane is important is the fact that every prime projective plane is a homomorphic image of $\mathcal{F}$. (Indeed, given a prime projective plane $\pi$, one may readily find a monomorphism $f_1$ of $X_1$ into $\pi$. Hence one may inductively construct a homomorphism $f_n$ of $X_n$ into $\pi$ such that $f_{n+1}$ extends $f_n$ for each $n$. Then $f := \bigcup_{n \geq 1} f_n$ is a homomorphism of $\mathcal{F}$ into $\pi$. Since $\pi$ is prime, $f$ must be an epimorphism.) This indicates that an in-depth study of the partial linear spaces $X_n$ may be a useful approach to Singhi's conjecture. Note that, in particular, if $\pi$ is a finite prime projective plane, then it follows from the above that $\pi$ is a homomorphic image of $X_n$ for sufficiently large $n$. We suggest that an investigation of the codes of the $X_n$ (over various primes) may be fruitful. But we do not even know a formula for the number $v_n$ of points (= number of lines) of $X_n$. The sequence $\{v_n\}$ begins with $4, 6, 7, 9, 13, 33, \ldots$
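The sequence $v_n$ can at least be computed mechanically. The following sketch simulates the inductive construction of the $X_n$ given above; running it should reproduce the quoted initial values, though the encoding of points and lines is of course my own.

```python
# Simulate X_1, X_2, ... : lines are frozensets of point labels.
from itertools import combinations

def next_step(points, lines):
    points, lines = list(points), list(lines)
    # pairs found in X_n BEFORE anything new is added:
    disjoint_line_pairs = [(a, b) for a, b in combinations(range(len(lines)), 2)
                           if not (lines[a] & lines[b])]
    new_lines = [frozenset((x, y)) for x, y in combinations(points, 2)
                 if not any(x in l and y in l for l in lines)]
    for a, b in disjoint_line_pairs:      # new point l_a ^ l_b on both lines
        pt = ("p", len(points))
        points.append(pt)
        lines[a] = lines[a] | {pt}
        lines[b] = lines[b] | {pt}
    return points, lines + new_lines

pts = [("p", i) for i in range(4)]        # X_1: the 4-cycle
lns = [frozenset((pts[i], pts[(i + 1) % 4])) for i in range(4)]
seq = []
for _ in range(6):
    seq.append(len(pts))
    pts, lns = next_step(pts, lns)
print(seq)                                # expected: [4, 6, 7, 9, 13, 33]
```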
When $q$ is a "genuine" prime power (i.e., $q = p^e$, $p$ prime, $e \geq 2$) and $q > 8$, there are many constructions of projective planes of order $q$ which are not field planes. But no such construction is known for $q = p$. The subject of this paper is the following Conjecture 1.2 (folklore): Up to isomorphism, $PG(2, \mathbb{F}_p)$ is the only projective plane of prime order $p$. Conjecture 1.2 bears a superficial resemblance to Singhi's conjecture. But the relationship between these two is far from clear. We do not even know if a projective plane of prime order must be a prime projective plane, or if a prime projective plane which is finite must be of prime order. Another related conjecture is due to H. Neumann [11] (which is much stronger than the finite case of Singhi's conjecture): she conjectured that a finite projective plane has no projective subplane of order two (if and) only if it is isomorphic to $PG(2, \mathbb{F}_q)$ for some odd prime power $q$.
In the humble opinion of this author, the uniqueness conjecture 1.2 is one of the most beautiful and important open problems in mathematics. It is amusing as well as sad that it finds no mention in the lists of "problems for the new millenium" compiled by various authors at the turn of the century. It is not lacking in history and pedigree. With some imagination, one may trace the history of such problems in finite geometry back to Euler's 1782 paper [6] on the problem of the thirty six officers. The projective planes P G(2, F p ) were first constructed by von Staudt [12] in 1856. They were generalized to the planes P G(2, F q ), q prime power, by Fano [8] in 1892. The first examples of non-field finite projective planes were constructed by Veblen and Wedderburn [13] in 1907. The conjecture 1.2 must have occurred to these early authors. The vast literature on finite projective planes include (usually as special cases) many characterizations of P G(2, F p ) on the assumption of moderately large automorphism groups. See [9] and [5, Chapter 5] for many of these results.
Coding theory
One very fruitful approach to problems in finite geometry has been through the study of codes attached to these geometries. For a comprehensive account of these connections, the reader may consult [1].
If $P$ is a finite set and $p$ is a prime number, then consider the $\mathbb{F}_p$-vector space $\mathbb{F}_p^P$ consisting of all functions $f : P \to \mathbb{F}_p$. For any such $f$, the support of $f$ is the set $\{x \in P : f(x) \neq 0\}$, and the Hamming weight $|f|$ of $f$ is the size (cardinality) of the support of $f$. The type of $f$ is the $p$-tuple $(a_\alpha : \alpha \in \mathbb{F}_p)$ where $a_\alpha = \#\{x \in P : f(x) = \alpha\}$. $\mathbb{F}_p^P$ is a metric space with the Hamming metric given by $d(f, g) = |f - g|$. One also equips $\mathbb{F}_p^P$ with the usual non-degenerate symmetric bilinear form $\langle \cdot, \cdot \rangle$ defined by $\langle f, g \rangle = \sum_{x \in P} f(x) g(x)$.
A $p$-ary code $C$ is a linear subspace of $\mathbb{F}_p^P$, viewed as a metric space with the Hamming metric inherited from $\mathbb{F}_p^P$. The vectors in $C$ are called the words of $C$. The elements of $P$ are called the co-ordinate positions of $C$. The minimum weight of $C$ is defined to be the number $\min\{|f| : f \in C \setminus \{0\}\}$. The dual code $C^\perp$ of $C$ is the orthocomplement of $C$ with respect to $\langle \cdot, \cdot \rangle$. Thus $C^\perp = \{g \in \mathbb{F}_p^P : \langle f, g \rangle = 0 \text{ for all } f \in C\}$. The Hamming weight enumerator of $C$ is $F(X, Y) := \sum_{f \in C} X^{n - |f|} Y^{|f|}$, where $n := \#(P)$, and the complete weight enumerator of $C$ is $G(\mathbf{X}) := \sum_{f \in C} \mathbf{X}^{\mathrm{type}(f)}$. Here we have used the usual notation $\mathbf{X}^j := \prod_{\alpha \in \mathbb{F}_p} X_\alpha^{j_\alpha}$ for a multi-index $j = (j_\alpha : \alpha \in \mathbb{F}_p)$ of non-negative integers. Note that the complete weight enumerator (c.w.e.) of $C$ carries much more information about the code than the Hamming weight enumerator. Indeed, $F$ may be obtained from $G$ by the substitutions $X_0 \mapsto X$ and $X_\alpha \mapsto Y$ for $\alpha \neq 0$. The Hamming weight enumerator enumerates the frequencies of the various Hamming weights occurring in the code, while the complete weight enumerator enumerates the frequencies of the various types. Finally, note that, since $\langle \cdot, \cdot \rangle$ is non-degenerate, we have the usual formula relating the dimensions of $C$ and $C^\perp$: $\dim(C) + \dim(C^\perp) = n$. There are also beautiful formulae giving the Hamming weight enumerator (and, more generally, the c.w.e.) of $C^\perp$ in terms of the corresponding enumerator of $C$. While these formulae are extremely important, we shall have no occasion to use them in this paper.
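To make the definitions concrete, here is a small brute-force sketch that lists the words of a toy code from a generator matrix and collects its complete weight enumerator as a dictionary from types to frequencies. The toy generator matrix is invented, and the enumeration is exponential in the dimension, so this is for illustration only.

```python
# Complete weight enumerator of a small p-ary code by brute force.
from itertools import product
from collections import Counter

def code_words(gen, p):
    k, n = len(gen), len(gen[0])
    for coeffs in product(range(p), repeat=k):
        yield tuple(sum(c * row[j] for c, row in zip(coeffs, gen)) % p
                    for j in range(n))

def cwe(gen, p):
    types = Counter()
    for w in code_words(gen, p):
        types[tuple(sum(1 for x in w if x == a) for a in range(p))] += 1
    return types           # maps (a_0, ..., a_{p-1}) -> number of words

G = [(1, 0, 1, 2), (0, 1, 2, 2)]   # toy ternary code of length 4
for t, freq in sorted(cwe(G, 3).items()):
    print(t, freq)
```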
Let $\mathcal{X}$ be a finite incidence system and $p$ be a prime. Let $P$ be the point set of $\mathcal{X}$. For any line $\ell$ of $\mathcal{X}$, we consider its indicator function $\ell : P \to \mathbb{F}_p$ given by $\ell(x) = 1$ if $x \in \ell$, and $\ell(x) = 0$ if $x \notin \ell$. Note that we have used the same letter to denote a line and its indicator. Thus, lines of $\mathcal{X}$ are also code words in $\mathbb{F}_p^P$. The $p$-ary code $C_p(\mathcal{X})$ of $\mathcal{X}$ is defined to be the vector subspace of $\mathbb{F}_p^P$ spanned by all the lines of $\mathcal{X}$.
If $\pi$ is a finite projective plane of order $n$, then it is easy to see that $C_p(\pi)$ is trivial when $p$ does not divide $n$. However, when $p$ divides $n$, $C_p(\pi) \cap C_p^\perp(\pi)$ is of co-dimension one in $C_p(\pi)$ (this intersection, the so-called "hull", is actually spanned by the pairwise differences of the lines of $\pi$). Also, when $p$ divides $n$, the minimum weight of $C_p(\pi)$ is $n + 1$, and the minimum weight words of $C_p(\pi)$ are precisely the non-zero scalar multiples of the lines of $\pi$. When $p$ exactly divides $n$ (i.e., $p \mid n$ but $p^2 \nmid n$), $\dim(C_p(\pi)) = \binom{n+1}{2} + 1$. See [1] for the proofs of these results.
In particular, if $\pi$ is a projective plane of prime order $p$, then $\dim(C_p(\pi)) = \binom{p+1}{2} + 1$, and the minimum weight words of $C_p(\pi)$ are the non-zero scalar multiples of lines. Also, Inamdar [10] proved that, in this case, the minimum weight words of $C_p^\perp(\pi)$ are precisely the non-zero scalar multiples of the pairwise differences of lines of $\pi$.
These results apply, in particular, to the p-ary code of P G(2, F p ). In [4], the present author proved that the first four minimum weights of the p-ary code of P G(2, F p ) (p ≥ 5) are p + 1, 2p, 2p + 1, and 3p − 3.
In an earlier paper [7], Fack et al. had proved that the only words of weight $p + 1$, $2p$ or $2p + 1$ (in this particular code) are the non-zero $\mathbb{F}_p$-linear combinations of pairs of lines in $PG(2, \mathbb{F}_p)$. However, a complete classification of the words of Hamming weight $3p - 3$ in $C_p(PG(2, \mathbb{F}_p))$ remains an open problem. We shall return to this question in the concluding section of this paper.
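In particular, the dimension formula $\dim(C_p(\pi)) = \binom{p+1}{2} + 1$ for planes of prime order can be machine-checked for a tiny prime. The sketch below recomputes $PG(2, \mathbb{F}_p)$ (repeating the construction from the earlier sketch so the snippet is self-contained) and verifies the $p$-rank of its incidence matrix for $p = 3$.

```python
# p-rank of the line-point incidence matrix of PG(2, F_p), checked for p = 3.
from itertools import product

def pg2(p):
    reps = [v for v in product(range(p), repeat=3)
            if any(v) and next(x for x in v if x) == 1]
    lines = [frozenset(x for x in reps
                       if sum(a * b for a, b in zip(x, l)) % p == 0)
             for l in reps]
    return reps, lines

def rank_mod_p(rows, p):
    rows, rank, col = [list(r) for r in rows], 0, 0
    while rank < len(rows) and col < len(rows[0]):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)          # p is prime
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

p = 3
points, lines = pg2(p)
N = [[1 if x in l else 0 for x in points] for l in lines]
print(rank_mod_p(N, p), (p + 1) * p // 2 + 1)         # both should be 7
```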
A series of Lemmas
In this section we prove a number of lemmas culminating in Lemma 3.9 which will be used in the next section. Our first lemma is well known.
Lemma 3.1. Let $P$ be the point-set of a projective plane $\pi$ of prime order $p$. Let $w \in \mathbb{F}_p^P$. Then $w \in C_p(\pi)$ if and only if $\langle w, \ell \rangle = \langle w, \mathbf{1} \rangle$ for all lines $\ell$ of $\pi$. (Here $\mathbf{1}$ is the constant function $1$ on $P$.) Proof: This means that $w \in C_p(\pi)$ iff $w$ is orthogonal to all the words in the set $\{\mathbf{1} - \ell : \ell \text{ a line of } \pi\}$. This is true since this set spans $C_p^\perp(\pi)$.
Lemma 3.2. Let $\mathcal{X}$ be a finite partial linear space and let $p$ be a prime. Then $\dim(C_p(\mathcal{X}^*)) = \dim(C_p(\mathcal{X}))$.
Proof: Let $P$ and $L$ be the point set and the line set of $\mathcal{X}$, and let $N$ be the corresponding incidence matrix. That is, the rows and columns of $N$ are indexed by $P$ and $L$ respectively, and, for $x \in P$, $\ell \in L$, the $(x, \ell)$th entry of $N$ is $= 1$ if $x \in \ell$ and $= 0$ otherwise. If we view $N$ as a linear operator from $\mathbb{F}_p^L$ to $\mathbb{F}_p^P$, then $C_p(\mathcal{X})$ is precisely the image of $N$. Therefore $\dim C_p(\mathcal{X}) = \mathrm{rank}_p(N)$. Now note that the transposed matrix $N^*$ is the incidence matrix of $\mathcal{X}^*$, so that we also have $\dim C_p(\mathcal{X}^*) = \mathrm{rank}_p(N^*)$. As $\mathrm{rank}_p(N^*) = \mathrm{rank}_p(N)$, the result follows.
Lemma 3.3. Let $\pi$ and $\sigma$ be two projective planes of prime order $p$. Suppose $\pi$ and $\sigma$ share at least $p^2 + 1$ lines. Then $\pi = \sigma$.
Proof: Let $L_0$ be a set of $p^2 + 1$ lines common to $\pi$ and $\sigma$. Since $\pi$ has $p^2 + p + 1$ lines, and each point of $\pi$ is in $p + 1$ lines, it follows that the union of the lines in $L_0$ is the entire point set of $\pi$. Similarly for $\sigma$. So $\pi$ and $\sigma$ have the same point set, call it $P$. Let $C_0$ be the subcode of $\mathbb{F}_p^P$ spanned by $L_0$. Thus $C_0$ is a subcode of both $C_p(\pi)$ and $C_p(\sigma)$. Consider the incidence system $\mathcal{X} = (P, L_0)$. Thus, $C_0 = C_p(\mathcal{X})$. Consider the restriction map $\rho : \mathbb{F}_p^L \to \mathbb{F}_p^{L_0}$ given by $w \mapsto w|_{L_0}$, where $L$ is the full set of lines of $\pi$. $\rho$ is a linear map which restricts to a linear map from $C_p(\pi^*)$ onto $C_p(\mathcal{X}^*)$. The kernel of this restricted map consists of the words $w$ of $C_p(\pi^*)$ with $\mathrm{support}(w) \subseteq L \setminus L_0$. But $\#(L \setminus L_0) = p$ and $C_p(\pi^*)$ has no non-zero word of Hamming weight $\leq p$. Therefore, the kernel is trivial and so $\rho$ restricts to a vector space isomorphism between $C_p(\pi^*)$ and $C_p(\mathcal{X}^*)$. In conjunction with Lemma 3.2, this yields $\dim(C_p(\pi)) = \dim(C_p(\pi^*)) = \dim(C_p(\mathcal{X}^*)) = \dim(C_p(\mathcal{X}))$. Since $C_0 = C_p(\mathcal{X})$ is a subcode of $C_p(\pi)$, it follows that $C_p(\pi) = C_0$. Similarly, $C_p(\sigma) = C_0$. Thus, $C_p(\pi) = C_p(\sigma)$. But the lines of $\pi$ are precisely the supports of the minimum weight words of $C_p(\pi)$, and similarly for $\sigma$. Therefore $\pi$ and $\sigma$ have the same set of lines as well. Hence $\pi = \sigma$. Definition 3.4. We shall say that an incidence system $\mathcal{Y}$ is $p$-admissible if it satisfies (i) $\mathcal{Y}$ has exactly $p^2 + p + 1$ points, (ii) each line of $\mathcal{Y}$ is incident with exactly $p + 1$ points, and (iii) any two distinct lines of $\mathcal{Y}$ are together incident with a unique point.
Thus, any $p$-admissible incidence system is a partial linear space. Note that any set of lines in a projective plane of order $p$ may be viewed as the set of all lines of a $p$-admissible incidence system. Lemma 3.5. Let $p \geq 2$. Let $\sigma$ be a $p$-admissible system. Then $\sigma$ has at most $p^2 + p + 1$ lines. Equality holds iff $\sigma$ is a projective plane of order $p$.
Proof: Fix a point $x$ of $\sigma$. Then the lines of $\sigma$ through $x$, minus the point $x$, are pairwise disjoint subsets of size $p$ each in the set of $p(p + 1)$ remaining points. So each of the $p^2 + p + 1$ points $x$ is in at most $p + 1$ lines. But each of the lines of $\sigma$ contains exactly $p + 1$ points. So, a two-way counting shows that $\sigma$ has at most $p^2 + p + 1$ lines. Now suppose $\sigma$ has $p^2 + p + 1$ lines. Then the above argument shows that each point $x$ is in exactly $p + 1$ lines. Therefore the lines through $x$ induce a partition of the remaining points. So, if $y \neq x$ is another point, then a unique line joins $x$ and $y$. Since $x$ was arbitrary, this shows that any two distinct points of $\sigma$ are together in a unique line of $\sigma$. Now fix two lines $\ell_1 \neq \ell_2$ of $\sigma$. Let $x$ be the common point of $\ell_1$ and $\ell_2$. Since $p + 1 \geq 3$, there is a third line $\ell$ through $x$ and there is a point $y \neq x$ on $\ell$. Then $y$ is non-incident with both $\ell_1$ and $\ell_2$. So $\sigma$ is a projective plane of order $p$.
Now suppose σ has p 2 + p + 1 lines. Then the above argument shows that each point x is in exactly p + 1 lines. Therefore the lines through x induce a partition of the remaining points. So, if y = x is another point, then a unique line joins x and y. Since x was arbitrary, this shows that any two distinct points of σ are together in a unique line of σ. Now fix two lines ℓ 1 = ℓ 2 of σ. Let x be the common point of ℓ 1 and ℓ 2 . Since p + 1 ≥ 3, there is a third line ℓ through x and there is a point y = x on ℓ. Then y is non-incident with both ℓ 1 and ℓ 2 . So σ is a projective plane of order p. Lemma 3.6. Let S be the union of k ≥ 1 lines of a p-admissible incidence system. Then (p + 1)k − k 2 ≤ #(S) ≤ pk + 1.
Lemma 3.7. Let $\mathcal{Y}$ and $\mathcal{Y}'$ be two $p$-admissible incidence systems. Suppose the union of some $m$ lines of $\mathcal{Y}$ equals the union of some $k$ lines of $\mathcal{Y}'$. If $\binom{k}{2} < p$ then $m = k$.
Proof: Let $S$ be a set which is the union of $m$ lines of $\mathcal{Y}$ as well as the union of $k$ lines of $\mathcal{Y}'$. Since $p > \binom{k}{2}$, Lemma 3.6 implies that $(k - 1)p + 1 < (p + 1)k - \binom{k}{2} \leq \#(S) \leq mp + 1$. Therefore $m \geq k$. Suppose, if possible, that $m > k$. Then $S$ is the union of $k$ lines of $\mathcal{Y}'$ and $S$ contains the union of $k + 1$ lines of $\mathcal{Y}$. Therefore, Lemma 3.6 implies that we have $(k + 1)(p + 1) - \binom{k+1}{2} \leq \#(S) \leq kp + 1$. Hence $p \leq \binom{k}{2}$. But this contradicts our assumption.

Lemma 3.8. Let $x_0, x_1, \ldots, x_{k-1}$ be non-negative integers such that $\sum_{0 \leq i < k} 2^i x_i = 2^k - 1$. Then (a) $\sum_{0 \leq i < k} x_i \geq k$, and (b) if $\sum_{0 \leq i < k} x_i = k$ then $x_i = 1$ for all $i$.
Proof: It suffices to show that if $\sum_{0 \leq i < k} x_i \leq k$ then $x_i = 1$ for all $i$. We prove this by induction on $k$. The result is trivial for $k = 1$. So assume $k > 1$. Note that $\sum_{0 \leq i < k} 2^i x_i = 2^k - 1$ implies that $x_0$ is odd. In particular, $x_0 \geq 1$. We define $k - 1$ non-negative integers $y_i$, $0 \leq i < k - 1$, as follows: $y_0 = \frac{1}{2}(x_0 - 1) + x_1$, and $y_i = x_{i+1}$ for $1 \leq i < k - 1$. We have $\sum_{0 \leq i < k-1} 2^i y_i = 2^{k-1} - 1$ and $\sum_{0 \leq i < k-1} y_i \leq k - 1$. Therefore, the induction hypothesis implies that $y_i = 1$ for $0 \leq i < k - 1$. That is, $x_i = 1$ for $1 < i < k$, and $x_0 + 2 x_1 = 3$. So, either $x_0 = x_1 = 1$, or $x_0 = 3$, $x_1 = 0$. But, in the latter case, we get $\sum_{0 \leq i < k} x_i = k + 1$, contrary to our assumption. So $x_i = 1$ for all $i$. This completes the induction.
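Lemma 3.8 is also easy to confirm by exhaustive search for small $k$; the sketch below checks that the all-ones tuple is the only solution with digit sum at most $k$, which yields both parts of the lemma.

```python
# Brute-force check of Lemma 3.8 for k = 1..5.
from itertools import product

def witnesses(k):
    target = 2 ** k - 1
    ranges = [range(target // (2 ** i) + 1) for i in range(k)]
    return [x for x in product(*ranges)
            if sum(2 ** i * xi for i, xi in enumerate(x)) == target
            and sum(x) <= k]

for k in range(1, 6):
    assert witnesses(k) == [tuple([1] * k)]
print("Lemma 3.8 verified for k = 1..5")
```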
Lemma 3.9. Let $p$ be a prime, and $\mathcal{Y}$ be a $p$-admissible incidence system with exactly $k$ lines. Enumerate the lines of $\mathcal{Y}$ (in any order) as $\ell_0, \ell_1, \ldots, \ell_{k-1}$, and put $w := \sum_{0 \leq i < k} 2^i \ell_i \in C_p(\mathcal{Y})$. Let $\pi$ be a projective plane of order $p$, and suppose $w'$ is a word in $C_p(\pi)$ such that $\mathrm{type}(w') = \mathrm{type}(w)$. If $p \geq 2^k$, then there are lines $\ell'_0, \ell'_1, \ldots, \ell'_{k-1}$ of $\pi$ such that $w' = \sum_{0 \leq i < k} 2^i \ell'_i$. Proof: For integers $i \geq 0$ and $x \geq 0$, let $\delta_i(x)$ denote the $i$th digit in the binary expansion of $x$, counting from the right, and taking the rightmost digit as the $0$th.
Let $P$ and $Q$ be the point sets of $\pi$ and $\mathcal{Y}$, respectively. For $0 \leq i < k$, define $\ell'_i := \{x \in P : \delta_i(w'(x)) = 1\}$. (Here we have identified the elements of $\mathbb{F}_p$ with the integers $0, 1, \ldots, p - 1$.) Notice that, since $w = \sum_{0 \leq i < k} 2^i \ell_i$ and $p \geq 2^k$, we also have $\ell_i = \{x \in Q : \delta_i(w(x)) = 1\}$ for $0 \leq i < k$. Further, since the values of $w$, and hence of $w'$, all lie in $\{0, 1, \ldots, 2^k - 1\}$, the definition of $\ell'_i$ gives $w' = \sum_{0 \leq i < k} 2^i \ell'_i$. As $\mathrm{type}(w') = \mathrm{type}(w)$, there is a bijection $f : Q \to P$ such that $w = w' \circ f$. It follows that, for $x \in Q$ and $0 \leq i < k$, $x \in \ell_i$ iff $f(x) \in \ell'_i$. Thus $f(\ell_i) = \ell'_i$ for $0 \leq i < k$. Therefore, if $\mathcal{Y}'$ denotes the incidence system with point set $P$ and lines $\ell'_i$, $0 \leq i < k$, then $f$ is an isomorphism between $\mathcal{Y}$ and $\mathcal{Y}'$. In consequence, $\mathcal{Y}'$ is also $p$-admissible, so that the $\ell'_i$ are sets of size $p + 1$ each, and any two distinct $\ell'_i$'s meet at a unique point.
Thus, to complete the proof, we have to show that the $k$ sets $\ell'_i$ are lines of $\pi$. So far we have not used the assumption $w' \in C_p(\pi)$. This assumption will play a crucial role in what follows.
Let $S$ and $S'$ be the supports of $w$ and $w'$ respectively. Since $p \geq 2^k$, we have $S = \bigcup_{0 \leq i < k} \ell_i$ and $S' = \bigcup_{0 \leq i < k} \ell'_i$.
Claim 1:
If $\ell$ is a line of $\pi$ such that $\ell \not\subseteq S'$, then $\sum_{0 \leq i < k} 2^i \, \#(\ell \cap \ell'_i) = 2^k - 1$. To prove this claim, first we note that $\langle w', \mathbf{1} \rangle = \sum_{0 \leq i < k} 2^i \, \#(\ell'_i) = (2^k - 1)(p + 1) \equiv 2^k - 1 \pmod{p}$. Since $w' \in C_p(\pi)$, Lemma 3.1 implies that $\langle w', \ell \rangle = \sum_{0 \leq i < k} 2^i \, \#(\ell \cap \ell'_i) \equiv 2^k - 1 \pmod{p}$ for any line $\ell$ of $\pi$. Therefore we have, as $p \geq 2^k$, $\sum_{0 \leq i < k} 2^i \, \#(\ell \cap \ell'_i) \geq 2^k - 1$ (1) for any line $\ell$ of $\pi$ (inequality in $\mathbb{N}$). Now fix a point $y \notin S'$. Adding the inequalities (1) over all the $p + 1$ lines $\ell$ through $y$, and noting that each point of $\ell'_i$ lies on exactly one line through $y$, we get $(p + 1)(2^k - 1) \leq \sum_{0 \leq i < k} 2^i \sum_{\ell \ni y} \#(\ell \cap \ell'_i) = \sum_{0 \leq i < k} 2^i (p + 1) = (p + 1)(2^k - 1)$. Since the two extreme terms here are equal, we must have equality throughout this argument. Therefore we have equality in (1) for any line $\ell$ through $y$. Since $y \notin S'$ was an arbitrary point, and any line $\ell \not\subseteq S'$ contains such a point, we have equality in (1) for any line $\ell \not\subseteq S'$. This proves Claim 1.
Claim 2:
For any line $\ell$ of $\pi$ with $\ell \not\subseteq S'$, we have $\#(\ell \cap \ell'_i) = 1$ for $0 \leq i < k$. To prove this claim, note that, by Claim 1 and Lemma 3.8 (a), $\sum_{0 \leq i < k} \#(\ell \cap \ell'_i) \geq k$ (2) for any line $\ell \not\subseteq S'$. Again, fix a point $y \notin S'$, and add the inequality (2) over all $p + 1$ lines $\ell$ of $\pi$ through $y$. We get: $k(p + 1) \leq \sum_{0 \leq i < k} \sum_{\ell \ni y} \#(\ell \cap \ell'_i) = \sum_{0 \leq i < k} (p + 1) = k(p + 1)$. Since the two extreme terms here are equal, we must have equality throughout this argument. Thus, we have equality in (2) for any line $\ell$ through $y$. Since the point $y \notin S'$ was arbitrary, it follows that we have equality in (2) for any line $\ell \not\subseteq S'$. Therefore, by Claim 1 and Lemma 3.8 (b), $\#(\ell \cap \ell'_i) = 1$ for $0 \leq i < k$ and for any such line $\ell$. This proves Claim 2.
Claim 3: $S'$ contains exactly $k$ lines of $\pi$. To see this, let $m$ be the number of lines $\ell$ of $\pi$ such that $\ell \subseteq S'$. Since $S'$ is the union of the sets $\ell'_i$ ($0 \leq i < k$), and since, by Claim 2, for any two points $x \neq y$ in $\ell'_i$ the line $\ell$ of $\pi$ joining $x$ and $y$ is contained in $S'$, it follows that $S'$ is the union of $m$ lines of $\pi$, as well as the union of $k$ lines of $\mathcal{Y}'$. Since $p \geq 2^k > \binom{k}{2}$, Lemma 3.7 implies that $m = k$. This proves Claim 3. Now, let $\sigma$ be the incidence system obtained from $\pi$ by deleting the $k$ lines contained in $S'$ and replacing them by the $k$ lines of $\mathcal{Y}'$ (contained in $S'$). Since $\pi$ is a projective plane of order $p$ and $\mathcal{Y}'$ is $p$-admissible, Claim 2 implies that $\sigma$ is $p$-admissible. Since $\sigma$ has $p^2 + p + 1$ lines, Lemma 3.5 implies that $\sigma$ is also a projective plane of order $p$. Also, by construction, $\sigma$ and $\pi$ share at least $p^2 + p + 1 - k \geq p^2 + p + 1 - \log_2 p \geq p^2 + 1$ lines. Therefore, by Lemma 3.3, $\sigma = \pi$. Since the $\ell'_i$ ($0 \leq i < k$) are lines of $\sigma$, it follows that they are lines of $\pi$.
The main results
We now introduce: Notation: Let Y and X be any two finite incidence systems. Then I(Y, X ) will denote the number of monomorphisms from Y into X . Also, i(Y, X ) will denote the number of isomorphic copies of Y which are subsystems of X .
Lemma 4.1. For any two finite incidence systems Y and X , we have I(Y, X ) = #(Aut(Y)) · i(Y, X ).
Proof: Note that, for any monomorphism f from Y to X , the image Y ′ of Y under f is an isomorphic copy of Y in X , and f may be viewed as an isomorphism from Y to Y ′ . Conversely, for any isomorphic copy Y ′ of Y in X , any isomorphism from Y to Y ′ may be viewed as a monomorphism from Y to X . Therefore, to complete the proof, it suffices to show that, whenever Y and Y ′ are isomorphic finite incidence systems, the number of isomorphisms from Y to Y ′ equals #(Aut(Y)). To see this, fix any isomorphism f from Y to Y ′ , and note that g → f • g is a bijection from the set of all automorphisms of Y onto the set of all isomorphisms from Y to Y ′ .
Notation: For any prime $p$, let $J_p$ denote the set of all multi-indices $j = (j_\alpha : \alpha \in \mathbb{F}_p)$ of non-negative integers such that $|j| := \sum_{\alpha \in \mathbb{F}_p} j_\alpha = p^2 + p + 1$. Also, let $\mathbf{X} = (X_\alpha : \alpha \in \mathbb{F}_p)$ be a set of commuting variables.
Theorem 4.2. Let $\pi$ be a projective plane of prime order $p$, and let $f(\mathbf{X}) = \sum_{j \in J_p} a_j \mathbf{X}^j$ be the complete weight enumerator of $C_p(\pi)$.
(Thus, for $j \in J_p$, $a_j$ is the number of words of type $j$ in $C_p(\pi)$.) Then, for any partial linear space $\mathcal{X}$ with at most $\log_2 p$ lines, there are rational numbers $\alpha_j$, $j \in J_p$, depending only on $\mathcal{X}$ and $p$, such that $i(\mathcal{X}, \pi) = \sum_{j \in J_p} \alpha_j a_j$.
Proof: Let $k$ be the number of lines of $\mathcal{X}$. Thus $p \geq 2^k$. Notice that, up to isomorphism, there are only finitely many $p$-admissible incidence systems $\mathcal{Y}$, with exactly $k$ lines, such that $\mathcal{X}$ is a subsystem of $\mathcal{Y}$. Let $\mathcal{Y}_j$, $0 \leq j < m$, be mutually non-isomorphic incidence systems such that every such incidence system $\mathcal{Y}$ is isomorphic to exactly one $\mathcal{Y}_j$.
Note that, for any isomorphic copy $\mathcal{X}'$ of $\mathcal{X}$ in $\pi$, there is a unique index $j$, $0 \leq j < m$, and a unique isomorphic copy $\mathcal{Y}'_j$ in $\pi$ of the incidence system $\mathcal{Y}_j$, such that $\mathcal{X}'$ is a subsystem of $\mathcal{Y}'_j$. (Since $\mathcal{X}'$ is a partial linear space which is a subsystem of $\pi$, each line $\ell$ of $\mathcal{X}'$ is contained in a unique line $\bar{\ell}$ of $\pi$. Then $\mathcal{Y}'_j$ must be the unique subsystem of $\pi$ whose point set equals that of $\pi$, and whose lines are the lines $\bar{\ell}$ of $\pi$ as $\ell$ varies over the $k$ lines of $\mathcal{X}'$.) Therefore we have $i(\mathcal{X}, \pi) = \sum_{0 \leq j < m} i(\mathcal{X}, \mathcal{Y}_j) \, i(\mathcal{Y}_j, \pi)$. Hence, to complete the proof, it suffices to show that for each index $j$, $i(\mathcal{Y}_j, \pi)$ can be written as a rational linear combination of the coefficients of $f$, with coefficients depending only on $\mathcal{Y}_j$.
So, fix a $p$-admissible incidence system $\mathcal{Y}$ with $k$ lines, $p \geq 2^k$. We have to show that there are rational numbers $\beta_i$, $i \in J_p$, depending only on $\mathcal{Y}$, such that $i(\mathcal{Y}, \pi) = \sum_{i \in J_p} \beta_i a_i$. To see this, take the word $w \in C_p(\mathcal{Y})$ as defined in Lemma 3.9. Let $j \in J_p$ be the type of $w$. Then, for each monomorphism $f$ from $\mathcal{Y}$ to $\pi$, $w \circ f^{-1}$ is one of the $a_j$ words of type $j$ in $C_p(\pi)$. Conversely, if $w' \in C_p(\pi)$ is a word of type $j$, then, by the proof of Lemma 3.9, each of the $j! := \prod_{\alpha \in \mathbb{F}_p} j_\alpha!$ bijections $f$ (from the point set of $\mathcal{Y}$ to the point set of $\pi$) satisfying $w' = w \circ f^{-1}$ is a monomorphism from $\mathcal{Y}$ to $\pi$. Thus $I(\mathcal{Y}, \pi) = j! \, a_j$, and hence, by Lemma 4.1, $i(\mathcal{Y}, \pi) = (j! / \#(\mathrm{Aut}(\mathcal{Y}))) \, a_j$. As an immediate consequence of Theorem 4.2, we have: Corollary 4.3. Let $\pi$, $\sigma$ be two projective planes of prime order $p$. Suppose $C_p(\pi)$ and $C_p(\sigma)$ have the same complete weight enumerator. Then, for any partial linear space $\mathcal{X}$ with at most $\log_2 p$ lines, we have $i(\mathcal{X}, \pi) = i(\mathcal{X}, \sigma)$.
"Theorem of Pappus": Let {x 1 , x 2 , x 3 } and {y 1 , y 2 , y 3 } be two disjoint 3-sets of collinear points of a projective plane π. Suppose {x 1 , x 2 , x 3 } and {y 1 , y 2 , y 3 } determine two distinct lines ℓ 1 , ℓ 2 of π and the six points x i , y i (1 ≤ i ≤ 3) are distinct from the point ℓ 1 ∧ ℓ 2 . Let us put We say that the theorem of Pappus holds in π, or that π is a Pappian, if, for every such choice of six initial points x i , y i (1 ≤ i ≤ 3) in π, the three points z 1 , z 2 , z 3 are collinear in π.
A projective plane need not be Pappian. In fact, a famous theorem in projective geometry states (see [5], [9]) that a projective plane $\pi$ is Pappian iff $\pi$ is the projective plane over a field, while $\pi$ is Desarguesian iff it is the projective plane over a division ring. Since, by the theorem of Wedderburn, the finite division rings are fields, this implies, in particular, that a finite projective plane $\pi$ is Pappian iff it is a field plane. Thus, the finite Pappian planes have prime power orders.
It is easy to see that the nine points $x_i, y_i, z_i$ ($1 \leq i \leq 3$) occurring in Pappus' theorem are necessarily distinct. When Pappus' theorem holds, this set of nine points contains the nine collinear triples listed in Table 1 below. (For some initial choices of the six points $x_i, y_i$, some or all of the three triples $\{x_i, y_i, z_i\}$, $1 \leq i \leq 3$, may also be collinear. But this does not affect the following arguments.)

Table 1:
$x_1 x_2 x_3$, $y_1 y_2 y_3$, $z_1 z_2 z_3$
$x_1 y_2 z_3$, $x_2 y_3 z_1$, $x_3 y_1 z_2$
$x_1 y_3 z_2$, $x_2 y_1 z_3$, $x_3 y_2 z_1$

Consider the partial linear space $\mathcal{P}$ (with nine points and nine lines) which is the subsystem of $PG(2, \mathbb{F}_3)$ obtained as follows. Fix a flag (i.e., an incident point-line pair) $(x, \ell)$ in $PG(2, \mathbb{F}_3)$. Then $\mathcal{P}$ is the subsystem of $PG(2, \mathbb{F}_3)$ whose points are the points of $PG(2, \mathbb{F}_3)$ non-incident with $\ell$, and whose lines are the intersections with this point set of the lines of $PG(2, \mathbb{F}_3)$ non-incident with $x$. Since the automorphism group of $PG(2, \mathbb{F}_3)$ is transitive on the flags, this defines the partial linear space $\mathcal{P}$ uniquely up to isomorphism.
Note that the nine collinear triples of Table 1, occurring in "the Theorem of Pappus", form an explicit list of the lines of P. This is why P is sometimes called the Configuration of Pappus. ("Configuration" is an old term for a partial linear space.) Also observe that, despite appearances, the validity (or otherwise) of the "Theorem of Pappus" does not depend on the explicit ordering of the six initial points, but it depends only on the bijection x i → y i (1 ≤ i ≤ 3) between the initial collinear tuples {x 1 , x 2 , x 3 } and {y 1 , y 2 , y 3 }. More precisely, if the three indices 1, 2, 3 in the statement are consistently permuted, then the validity (or invalidity) of the hypothesis and conclusion of this "theorem" remains unchanged.
In view of these observations, the theorem of Pappus may be reformulated as follows.
The theorem of Pappus (alternative version): Let us say that two 3-sets $\alpha, \beta$ of points in a projective plane $\pi$ form an admissible pair if (i) $\alpha$ and $\beta$ are collinear triples, (ii) $\alpha$ and $\beta$ are disjoint, and (iii) no four points in $\alpha \sqcup \beta$ are collinear in $\pi$. Then $\pi$ is said to satisfy "the theorem of Pappus" (or $\pi$ is Pappian) if, for every pair $(\alpha, \beta)$ of admissible triples of $\pi$ and every bijection $f : \alpha \to \beta$, there is a unique isomorphic copy of $\mathcal{P}$ in $\pi$ such that (a) $\alpha$ and $\beta$ are lines of $\mathcal{P}$, and (b) for each $x \in \alpha$, $x$ and $f(x)$ are non-collinear in $\mathcal{P}$.
Finally, note that the points and lines of P are uniquely determined by the triple (α, β, f ) as above. Namely, the nine points and eight of the lines of P are determined by the hypothesis, and the ninth line of P is determined by the conclusion of Pappus' Theorem, given (α, β, f ).
Therefore, the characterization of finite field planes as the finite Pappian projective planes may be rephrased as follows.
Theorem 4.4. Let $\pi$ be a projective plane of order $n$. Then $i(\mathcal{P}, \pi)$ is bounded above by an explicit function of $n$ alone, and the bound is attained if and only if $\pi$ is Pappian. Using Theorem 4.4 and Corollary 4.3 with $\mathcal{X} = \mathcal{P}$, $\sigma = PG(2, \mathbb{F}_p)$, we get (as $\mathcal{P}$ has nine lines): Theorem 4.5. Let $\pi$ be a projective plane of prime order $p$ such that $\pi$ has the same complete weight enumerator (of its $p$-ary code) as $PG(2, \mathbb{F}_p)$. If $p > 2^9$, then $\pi$ is isomorphic to $PG(2, \mathbb{F}_p)$.
Recall that a projective plane $\pi$ is said to be Desarguesian if (in the standard terminology of projective geometry) each pair of triangles in $\pi$ which is centrally perspective is also axially perspective. Consider the Petersen graph, which may be described as the graph whose vertices are the $\binom{5}{2}$ unordered pairs of symbols from a set of five symbols, with disjointness as adjacency. Let $\mathcal{D}$ be the partial linear space whose points and lines are both indexed by the vertices of the Petersen graph, such that the line indexed by $x$ is incident with the point indexed by $y$ iff $x$ and $y$ are adjacent vertices of the graph. $\mathcal{D}$ is known as the Configuration of Desargues since it stands in the same relation to "Desargues' theorem" as $\mathcal{P}$ to the theorem of Pappus. Therefore, the well-known theorem ([5], [9]) that a projective plane is a field plane iff it is Desarguesian may be rephrased as in Theorem 4.4 in the finite case. Namely, for every projective plane $\pi$ of order $n$, one may write down an upper bound for $i(\mathcal{D}, \pi)$ in terms of $n$ alone, which is attained iff $\pi$ is a field plane. Using this theorem, one can write an alternative proof of Theorem 4.5. However, since $\mathcal{D}$ has ten lines, this alternative proof works only for $p > 2^{10}$. We have chosen to work with $\mathcal{P}$ since it has fewer lines.
Speculations
The bound $\log_2 p$ in Theorem 4.2 is perhaps the best possible. However, we expect that the bound $p > 2^9$ in Theorem 4.5 is unnecessary, and this theorem actually holds for all primes $p$. For instance, if the conjecture of Neumann (briefly mentioned in the introduction) is correct, then we can use $PG(2, \mathbb{F}_2)$ instead of $\mathcal{P}$ in the proof of Theorem 4.5, pushing its bound to $p > 2^7$. In any case, Theorem 4.5 shows that, in order to prove the uniqueness conjecture 1.2, at least for large primes $p$, it suffices to calculate the complete weight enumerator for arbitrary projective planes of order $p$. But this is a tall order! We do not even know the complete weight enumerator of $PG(2, \mathbb{F}_p)$ for any prime $p \geq 7$.
If this conjecture is correct, then, of course, to prove Conjecture 1.2 it will suffice to investigate the initial segment of the Hamming weight enumerator of the dual p-ary code of arbitrary projective planes of order p. | 2018-01-22T11:02:44.000Z | 2018-01-22T00:00:00.000 | {
"year": 2018,
"sha1": "2b67eac09d9701fe33fc56f496c08b3b150fe572",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.07038",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b67eac09d9701fe33fc56f496c08b3b150fe572",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
256182465 | pes2o/s2orc | v3-fos-license | A Low-Molecular-Weight BDNF Mimetic, Dipeptide GSB-214, Prevents Memory Impairment in Rat Models of Alzheimer’s Disease
Brain-derived neurotrophic factor (BDNF) is known to be involved in the pathogenesis of Alzheimer’s disease (AD). However, the pharmacological use of full-length neurotrophin is limited, because of its macromolecular protein nature. A dimeric dipeptide mimetic of the BDNF loop 1, bis-(N-monosuccinyl-L-methionyl-L-serine) heptamethylene diamide (GSB-214), was designed at the Zakusov Research Institute of Pharmacology. GSB-214 activates TrkB, PI3K/AKT, and PLC-γ1 in vitro. GSB-214 exhibited a neuroprotective activity during middle cerebral artery occlusion in rats when administered intraperitoneally (i.p.) at a dose of 0.1 mg/kg and improved memory in the novel object recognition test (0.1 and 1.0 mg/kg, i.p.). In the present study, we investigated the effects of GSB-214 on memory in the scopolamine- and steptozotocin-induced AD models, with reference to activation of TrkB receptors. AD was modeled in rats using a chronic i.p. scopolamine injection or a single streptozotocin injection into the cerebral ventricles. GSB-214 was administered within 10 days after the exposure to scopolamine at doses of 0.05, 0.1, and 1 mg/kg (i.p.) or within 14 days after the exposure to streptozotocin at a dose of 0.1 mg/kg (i.p.). The effect of the dipeptide was evaluated in the novel object recognition test; K252A, a selective inhibitor of tyrosine kinase receptors, was used to reveal a dependence between the mnemotropic action and Trk receptors. GSB-214 at doses of 0.05 and 0.1 mg/kg statistically significantly prevented scopolamine-induced long-term memory impairment, while not affecting short-term memory. In the streptozotocin-induced model, GSB-214 completely eliminated the impairment of short-term memory. No mnemotropic effect of GSB-214 was registered when Trk receptors were inhibited by K252A.
INTRODUCTION
Alzheimer's disease (AD) is the most common cause of dementia, accounting for 60-80% of all dementia cases, while no effective pathogenetic therapy exists today for this disease [1].
Over the past two decades, regulation of the activity of neurotrophin receptors, and the brain-derived neurotrophic factor (BDNF) in particular, has been viewed as a new strategy for treating neurodegenerative diseases. BDNF maintains neuronal viability and synaptic plasticity, playing an important role in the processes of learning and memory. Data indicative of BDNF involvement in the pathogenesis of AD have been published [2][3][4]. Reduced BDNF expression is already observed at the early stage of the disease and correlates with an accumulation of β-amyloid and the hyperphosphorylated tau protein [5]. The favorable effects of exogenous BDNF have been demonstrated in various AD models. BDNF ensures neuronal protection under conditions of β-amyloid toxicity both in vitro and in vivo [6]. Insertion of the BDNF gene within a lentiviral vector into J20 transgenic mice (carrying mutations in the gene encoding the amyloid precursor protein) prevented the death of the cells of the entorhinal cortex and improved cognitive functions [7]. It has been shown using another genetic model of AD (P301L mice carrying the mutant tau protein gene) that stable human BDNF gene expression restored the BDNF level, thus preventing neuronal and synaptic degeneration in the hippocampus, as well as cognitive disorders [8]. However, the gene therapy has such shortcomings as invasiveness, high cost, and the risk of adverse effects related to the pleiotropic effect of BDNF.
The clinical use of BDNF is impeded by its poor penetration through the blood-brain barrier and rapid degradation [9]. Low-molecular-weight BDNF mimetics with improved pharmacokinetic properties are currently being developed [10,11]. Activity of the low-molecular-weight BDNF mimetic 7,8-dihydroxyflavone, a TrkB receptor agonist, was determined using AD models [12][13][14].
A dimeric dipeptide mimetic of the BDNF loop 1, GSB-214 (bis-(N-monosuccinyl-L-methionyl-L-serine) heptamethylene diamide), was designed and synthesized at the Zakusov Research Institute of Pharmacology based on the hypothesis that the most exposed domains of the loop-like neurotrophin structures (most frequently, the central domains of their β turns) exhibit pharmacophoric properties [15] (Fig. 1).
Earlier, Western blotting showed that incubation of HT-22 mouse hippocampal cells in the presence of GSB-214 for 5-180 min results in the activation of TrkB receptors and the conjugated PI3K/Akt and PLC-γ1 signaling pathways, but not the MAPK/ERK signaling pathway [10]. It has been shown using HT-22 cells that GSB-214 at micro-nanomolar concentrations exhibits neuroprotective activity under oxidative stress [15].
The dipeptide GSB-214 (administered i.p. at doses of 0.1-0.5 mg/kg) exhibited in vivo neuroprotective activity in a rat model of transient middle cerebral artery occlusion [16] and antidiabetic activity in a streptozotocin-induced model of diabetes in mice [17]. Taking into account the findings regarding the similarity of the pathogenesis of diabetes and AD [18], the antidiabetic properties of GSB-214, along with the neuroprotective properties, indicate that there is promise in studying the effects of the dipeptide in AD models.
The objective of our work was to investigate the effect of GSB-214 on memory in the scopolamine- and streptozotocin-induced models of AD, as well as to evaluate the dependence of its mnemotropic activity on the activation of Trk receptors.
Animals
The experiments were conducted using male Wistar rats (weight, 230-260 g) procured from the Andreevka Branch of the Research Center for Biomedical Technologies, the Federal Medical-Biological Agency (FMBA). The animals were kept in a vivarium with ad libitum access to food and water and a natural light-dark cycle. The behavioral experiments were carried out between 10 a.m. and 2 p.m.
The scheme of the experiment is shown in Fig. 2.
Streptozotocin-induced model of AD
The rats were randomly assigned to the following groups: Control (n = 10), Streptozotocin (STZ) (n = 7), and STZ + GSB-214 (0.1 mg/kg) (n = 8). STZ in citrate buffer was stereotactically injected into the cerebral ventricles at a dose of 3 mg/kg (AP = −1.0; L = 1.5; depth, 3.5). The injection volume was 3 μL per ventricle; the injection rate was 1 μL/min. One hour after the exposure, the rats received an i.p. injection of GSB-214 (0.1 mg/kg) and then received injections once daily for 13 days. The rats in the Control group were injected with equivalent volumes of citrate buffer instead of STZ and distilled water instead of GSB-214, according to the same scheme. The rats in the STZ group received STZ and distilled water.
The novel object recognition test was carried out on days 19-20. The scheme of the experiment is shown in Fig. 3.
The novel object recognition test
This test is based on rodents' natural instinct to investigate novel objects [19]. It is widely used for assessing both short-term and long-term memory [20].
The test was conducted in T4 cages identical to the home cages where the animals had been housed throughout the study. A rat was first placed into an empty cage with the floor covered with sawdust for 4 min to adapt.
The familiarization phase. Two identical objects unfamiliar to the rat were placed in the two nearest corners of the cage. The time spent exploring the objects was recorded for 4 min. The rat was then returned to its home cage.
Test.
A new pair of objects was placed in the same corners of the cage; one object was identical to those presented to the rats during the familiarization phase, while the other was unfamiliar. The time spent exploring the familiar and novel objects was recorded for 4 min. The test was carried out 1 h (test 1) and 24 h (test 2) after the familiarization phase to assess short-term and long-term memory, respectively. Different unfamiliar objects were used in test 1 and test 2. Exploration was defined as sniffing, with the distance between the animal's snout and the object being ≤ 2 cm.
The discrimination index (DI) was used as the memory criterion [21]; it was calculated using the formula DI = (T_novel − T_fam)/(T_novel + T_fam), where T_novel was the time spent exploring the novel object and T_fam was the time spent exploring the familiar object. DI values > 0 meant that the animal remembered the object presented to it in the familiarization phase.
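For clarity, the calculation can be expressed as a short script; the exploration times in the example are illustrative values, not data from this study.

```python
def discrimination_index(t_novel: float, t_fam: float) -> float:
    """DI = (T_novel - T_fam) / (T_novel + T_fam); DI > 0 indicates memory."""
    total = t_novel + t_fam
    if total == 0:
        raise ValueError("no exploration time recorded for either object")
    return (t_novel - t_fam) / total

# Example: 30 s on the novel object and 20 s on the familiar one -> DI = 0.2
print(discrimination_index(30.0, 20.0))
```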
Pharmacological inhibitory analysis
The rats were randomly assigned to the following groups: Control (distilled water and 1% DMSO in normal saline, n = 12), GSB-214 (0.1 mg/kg), K252A, and GSB-214 (0.1 mg/kg) + K252A. The dose of GSB-214 was chosen based on earlier experiments [22].
Statistical analysis
Statistical analysis of the experimental data was performed using GraphPad Prism 8.0 software (GraphPad Software, USA). The statistical significance of differences in the discrimination index was assessed using one-way ANOVA followed by pairwise intergroup comparisons with Dunnett's test, or two-way ANOVA followed by pairwise intergroup comparisons with Tukey's test.
The data were presented as the mean ± standard error of the mean. Differences were considered statistically significant at p < 0.05.
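As a minimal sketch of this analysis pipeline (assuming SciPy ≥ 1.11, which provides scipy.stats.dunnett; the group values below are placeholders, not data from this study):

```python
import numpy as np
from scipy import stats

# Placeholder discrimination-index values for three groups (not study data)
control = np.array([0.35, 0.28, 0.41, 0.33, 0.30])
sc = np.array([0.05, 0.10, -0.02, 0.08, 0.04])        # scopolamine
sc_gsb = np.array([0.25, 0.31, 0.22, 0.27, 0.29])     # scopolamine + GSB-214

# One-way ANOVA across groups, then Dunnett's test vs. the control group
f_stat, p_anova = stats.f_oneway(control, sc, sc_gsb)
dunnett = stats.dunnett(sc, sc_gsb, control=control)

print(f"ANOVA p = {p_anova:.4f}")
print("Dunnett p-values vs. control:", dunnett.pvalue)
```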
The dipeptide GSB-214 prevents long-term memory impairment in the scopolamine-induced model of AD
Compared to the control group, chronic administration of scopolamine significantly reduced the discrimination index in both test 1 (1 h after familiarization with the objects, p = 0.0212) and test 2 (24 h after familiarization, p = 0.0077), indicating that short-term and long-term memory, respectively, were impaired (Table 1). Chronic administration of GSB-214 at doses of 0.05 and 0.1 mg/kg prevented long-term memory impairment (p = 0.0177 and 0.0304 vs. the SC group, respectively), although it had no effect on short-term memory. No activity was observed for the dipeptide GSB-214 when administered at a dose of 1.0 mg/kg (Table 1).
Hence, GSB-214 administered i.p. at doses of 0.05 and 0.1 mg/kg for 10 days proved effective against long-term memory impairment in the scopolamine-induced model of AD.
The dipeptide GSB-214 prevents short-term memory impairment in the streptozotocin-induced model of AD
In the streptozotocin-induced model of AD, we uncovered significant memory impairment in the rats of the STZ group 1 h after familiarization with the objects (p = 0.0045), but not after 24 h (Table 2). Therefore, in this experimentally induced model of AD, rats experienced short-term, rather than long-term, memory impairment, which is typical of the early stage of the disease [23]. GSB-214 at a dose of 0.1 mg/kg yielded a statistically significant correction of this impairment (p = 0.0032); the discrimination index in the group of animals receiving treatment was 4.8-fold higher than that in the STZ group (Table 2).
Hence, the dipeptide GSB-214 completely prevented short-term memory impairment in the streptozotocin-induced model of AD.
The mnemotropic activity of GSB-214 depends on the activation of Trk receptors
In order to confirm the involvement of Trk receptor activation in the mnemotropic effects of GSB-214, we studied how K252A, an inhibitor of these receptors, influences the effects of GSB-214 in the novel object recognition test. Table 3 shows that the dipeptide GSB-214 significantly improved long-term memory: the discrimination index in the test after 24 h increased approximately 1.5-fold compared to that in the control group. This effect was completely eliminated by injecting K252A 20 min before the exposure to GSB-214. K252A per se did not affect the rats' memory. The studied compounds exhibited no effect on the short-term memory of the rats (test 1) (Table 3).
DISCUSSION
Earlier, we found that a single i.p. dose of the BDNF dipeptide mimetic GSB-214 (0.1 and 1.0 mg/kg) had a favorable effect on the long-term memory of rats in the novel object recognition test [22].
In this study, we investigated the mnemotropic activity of GSB-214 in the same test in the scopolamine- and streptozotocin-induced models of AD.
The scopolamine-induced amnesia model is commonly used for evaluating potential therapeutic agents for treating AD [24][25][26]. Chronic exposure to scopolamine causes a cholinergic deficit, mainly through blockade of acetylcholine receptors, and therefore cognitive impairment [25]. In our modification of the model [24], the impairment induced by chronic exposure to scopolamine and its subsequent discontinuation (see the scheme of the experiment in Fig. 2) is attributed to the activation of feedback mechanisms, which first increase the density and affinity of acetylcholine receptors and subsequently induce the cholinergic deficit due to accelerated binding of the "available" acetylcholine.
The model of AD induced by intracerebroventricular injection of streptozotocin is also commonly used, has been validated, and is well studied [27,28]. Streptozotocin, a diabetogenic toxin, enters cells by binding to glucose transporter 2, because it is structurally similar to a glucose molecule [28]. Intracerebral administration of streptozotocin induces insulin resistance and impairs brain glucose metabolism [29]. It causes neuropathological features typical of AD, such as accumulation of β-amyloid and hyperphosphorylated tau protein, oxidative stress, and neuronal and synaptic death [30][31][32][33]. Like the scopolamine-induced model of AD, the streptozotocin-induced model is associated with memory disorders [31,33].
We have revealed short-term and long-term memory impairment in the scopolamine-induced model of AD, which is consistent with the published data [26,34]. The dipeptide GSB-214 eliminated only long-term memory impairment, while having no effect on short-term memory. This finding agrees with our earlier data obtained under physiological conditions in the novel object recognition test [22]. We assume that the revealed effect of GSB-214 can be attributed to the activation of the PI3K/Akt post-receptor signaling pathway, which was demonstrated earlier in in vitro experiments [10]. Serine/threonine protein kinase mTOR, one of the major protein synthesis regulators, is a component of the PI3K/Akt pathway [35]; it is viewed as a key factor in memory consolidation and, therefore, long-term memory formation [36]. It was found, using the novel object recognition test, that mTOR inhibition impairs long-term memory, but not short-term memory, in rats [37]. A hypothesis can be put forward that the effects of GSB-214 in the scopolamine-induced model of AD are related to the improvement of memory consolidation via activation of the TrkB/PI3K/Akt/mTOR signaling pathway. We have demonstrated by pharmacological inhibitory analysis that the mnemotropic activity of GSB-214 is caused by activation of the Trk neurotrophin receptors, with which the PI3K/Akt/mTOR signaling pathway is associated.
In the streptozotocin-induced model, we observed only short-term memory impairment, which can be indicative of the relatively mild neurodegenerative changes characteristic of early AD [38]. GSB-214 eliminated this impairment. Since no effect of GSB-214 on short-term memory under physiological conditions was observed previously [22], it is fair to assume that memory was recovered due to an increase in neuronal viability under streptozotocin-induced toxicity. The neuroprotective effects of GSB-214 were revealed earlier in in vitro experiments [15], as well as in a rat model of ischemic stroke induced by transient middle cerebral artery occlusion [16]. These effects, like the mnemotropic ones, are presumably associated with the activation of the PI3K/Akt signaling pathway. This pathway is known to mediate neuroprotection by inhibiting pro-apoptotic proteins and increasing the expression of anti-apoptotic proteins [39]. PI3K/Akt was shown to mediate a reduction of the activity of glycogen synthase kinase 3β (GSK-3β), which is involved in increased β-amyloid production and hyperphosphorylation of the tau protein [40].
[Table notes. Tables 1 and 2: data are presented as the mean ± standard error of the mean; ** p < 0.01, * p < 0.05 compared to the Control group; # p < 0.05 compared to the SC group (Table 1) and ## p < 0.01 compared to the STZ group (Table 2) (one-way ANOVA, Dunnett's test). Table 3: *** p < 0.001 compared to the Control group; #### p < 0.0001 compared to the GSB-214 group (two-way ANOVA, Tukey's test).]
Interestingly, the previously revealed antidiabetic activity of GSB-214 proved dependent on the activation of the PI3K/Akt pathway, as shown by pharmacological inhibitory analysis [17]. Since it is well known that AD and diabetes mellitus share a similar pathogenesis [18], this fact supports the idea that the PI3K/Akt pathway also contributes to the effects of GSB-214 in the streptozotocin-induced model, which reproduces all the major pathophysiological mechanisms of AD. Figure 4 shows the putative mechanisms of action of GSB-214 in AD models. Additional studies are needed to identify the exact mechanisms of action of GSB-214 in experimentally induced models of AD.
Activation of the PI3K/Akt signaling pathway by the dipeptide GSB-214, which had previously been identified in in vitro experiments [10], may promote neuroprotection by inhibiting pro-apoptotic proteins and activating anti-apoptotic proteins, as well as improve memory consolidation and, therefore, long-term memory through activation of mTOR, a regulator of protein synthesis.
"year": 2022,
"sha1": "70ae784f70f0be44c07db6bf3b252efb1c28642b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6fb54d93c1328b65a06a25bf5587e3959651838e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Slope walking causes short-term changes in soleus H-reflex excitability
The purpose of this study was to test the hypothesis that downslope treadmill walking decreases spinal excitability. Soleus H-reflexes were measured in sixteen adults on 3 days. Measurements were taken before and twice after 20 min of treadmill walking at 2.5 mph (starting at 10 and 45 min post-walking). Participants walked on a different slope each day [level (Lv), upslope (Us) or downslope (Ds)]. The tibial nerve was electrically stimulated with a range of intensities to construct the M-response and H-reflex curves. Maximum evoked responses (Hmax and Mmax) and slopes of the ascending limbs (Hslp and Mslp) of the curves were evaluated. Rate-dependent depression (RDD) was measured as the % depression of the H-reflex when measured at a rate of 1.0 Hz versus 0.1 Hz. Heart rate (HR), blood pressure (BP), and ratings of perceived exertion (RPE) were measured during walking. Ds and Lv walking reduced the Hmax/Mmax ratio (P = 0.001 and P = 0.02), although the reduction was larger for Ds walking (29.3 ± 6.2% vs. 6.8 ± 5.2%, P = 0.02). The reduction associated with Ds walking was correlated with physical activity level as measured via questionnaire (r = −0.52, P = 0.04). Us walking caused an increase in the Hslp/Mslp ratio (P = 0.03) and a decrease in RDD (P = 0.04). These changes recovered by 45 min. Exercise HR and BP were highest during Us walking. RPE was greater during Ds and Us walking compared to Lv walking, but did not exceed "Fairly light" for Ds walking. In conclusion, in healthy adults treadmill walking has a short-term effect on soleus H-reflex excitability that is determined by the slope of the treadmill surface.
Introduction
The spinal cord is a major locus of activity-dependent neural plasticity associated with motor learning and skilled performance improvements (Windhorst 1996;Wolpaw and Tennissen 2001). Activity-dependent spinal plasticity can be evoked in the short term and the long term, and spinal excitability reflects the regular patterns of motor activity in which people engage (Zehr 2002). For example, soleus H-reflexes are smaller in explosively trained athletes (Casabona et al. 1990) and skilled ballet dancers (Nielsen et al. 1993), and larger in endurance-trained athletes, compared to other athletes and to untrained controls (Casabona et al. 1990;Maffiuletti et al. 2001). Walking is a ubiquitous and fundamental rhythmic activity that supports independence and quality of life (Yildiz 2012) and has also been found to evoke spinal plasticity. For example, a recent study reported that 30 min of level treadmill walking causes short-term H-reflex depression in healthy adults (Thompson et al. 2006). Another study found that 20 min of over-ground walking caused an increase in rate-dependent depression of H-reflexes, a form of presynaptic inhibition (Phadke et al. 2009). However, it is not known if the response of the H-reflex pathway could be augmented by changing the parameters of the walking task, for example, surface slope.
During downslope (Ds), upslope (Us), and level (Lv) walking, lower extremity muscles express unique patterns of electromyographic (EMG) activity that reflect differences in motor output and afferent feedback with different slopes (Akima et al. 2005;Gregor et al. 2006;Lay et al. 2007). For example, an increase in extensor muscle EMG activity during Us walking, and a decrease during Ds walking, has been reported for both quadrupeds (Gregor et al. 2006;Sabatier et al. 2011) and humans (Lay et al. 2007;Franz and Kram 2012). Moreover, during Ds walking there is increased muscle length-dependent afferent feedback compared to Lv or Us walking due to increased reliance on eccentric muscle contractions (Kuster et al. 1995;Abelew et al. 2000;Gregor et al. 2001;McIntosh et al. 2006). Downslope walking might also require more cortical activity than either Lv or Us walking due to this reliance on eccentric muscle contractions (Fang et al. 2001, 2004). This might cause an increase in cortically mediated spinal inhibition that would manifest as a reduction in spinal excitability (Wolpaw 2007). Another way Ds walking could reduce spinal excitability is through activation of the Ia spinal reflex arc. For example, activation of the Ia spinal reflex arc through stimulation of muscle spindles by vibration results in decreased spinal excitability (Shinohara 2005). Thus, Ds walking, through a more natural form of persistent activation of the spinal Ia reflex arc, may have a similar effect. Finally, Ds walking is less metabolically demanding and evokes a smaller cardiovascular response than level or Us walking (Knuttgen et al. 1971;Navalta et al. 2004). Therefore, as a potential adjunct for exercise prescription, Ds walking could be more accommodating for people with reduced exercise capacity.
It is also possible that Us walking might have a unique ability to evoke spinal plasticity. During Us walking the body's center of mass is moved vertically, against gravity, and there is increased motor unit recruitment in lower extremity muscles compared to Lv and Ds walking (Lay et al. 2007;Franz and Kram 2012). Increased effort required for Us walking would be expected to promote increased serotonergic and noradrenergic signaling (Aston-Jones et al. 2000;Jacobs et al. 2002), potentially leading to a change in spinal excitability. An increase in spinal excitability would facilitate muscle stiffness and force transmission in the lower extremity extensor muscles to serve the Us walking movement pattern. At this time, whether or not walking slope modulates the effect of walking on spinal synaptic transmission remains unknown.
The purpose of this study was to test the hypothesis that Ds walking evokes a decrease in soleus H-reflex excitability. This study also hypothesized an increase in soleus H-reflex excitability as a result of Us walking. Soleus H-reflexes were measured while participants rested in a recumbent seated position, both before and after treadmill slope walking.
Participants
Sixteen participants (nine men and seven women) between the ages of 23 and 44, who did not participate in competitive sports and had no neurologic disease or injury, were tested on 3 days within a period of 1 week. Participant characteristics were as follows (mean ± SD): age: 27.3 ± 6.0 years; height: 177.8 ± 10.5 cm; mass: 76.3 ± 15.6 kg; BMI: 23.9 ± 3.4 kg/m². All participants were tested before and after treadmill walking. However, four of these participants were tested only at the pre and the first post-walking time points. Each visit started within the same 1-h window of time to avoid the effects of diurnal variation in reflex size (Wolpaw and Seegal 1982;Lagerquist et al. 2006;Thompson et al. 2013). All participants were familiar with treadmill walking. The right leg was tested in all but one participant, who asked for the left leg to be tested. Written informed consent was obtained from all participants, and the study was approved by the Institutional Review Board of Emory University.
Procedures
Physical activity (PA) during the week prior to testing was assessed with the 7-day PA recall questionnaire (Blair et al. 1985;Motl et al. 2003b;Cureton et al. 2009). There was a range of values for PA (40-425 kJ/day), but the group average (212 ± 113 kJ/day) was similar to that of healthy college-aged nonathletes as reported in other studies (Motl et al. 2003b;Cureton et al. 2009). Participants walked at a slow speed, 4.0 km/h (2.5 mph), on a Sole Fitness F85 Folding Treadmill (Niagra Falls, ON) downslope (−8.5°, −15%), level (0%), and upslope (2.9-8.5°, i.e., 5-15%), for 20 min. Duration was selected based on previous reports of altered H-reflex responses with other activities (treadmill running (Bulbulian and Darabos 1986), cycle ergometry (Motl et al. 2006)). For Us walking the treadmill slope changed every 5 min. The order of slopes was 2.9°, 8.5°, 5.7°, and 2.9°. This pattern was used to control for the much larger increase in effort that is associated with Us walking, and to ensure that all participants could maintain a full 20 min of uninterrupted Us walking (Navalta et al. 2004).
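For reference, treadmill grade (%) and incline angle are related by tan θ = grade/100; the short check below reproduces the degree-percent pairings quoted above and is purely illustrative.

```python
import math

# tan(theta) = grade / 100; check the degree-percent pairings quoted above
for grade_pct in (5, 10, 15):
    theta = math.degrees(math.atan(grade_pct / 100))
    print(f"{grade_pct}% grade = {theta:.1f} deg")
# prints 2.9, 5.7 and 8.5 deg, matching the slopes used for Us and Ds walking
```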
A different walking condition was tested for each test session. Upslope and Lv walking were randomized to days 1 and 2. Downslope walking was always reserved for day 3 because of the potential for delayed onset muscle soreness (Whitehead et al. 2001;Farr et al. 2002;Nottle and Nosaka 2005) that could have then affected walking and/ or H-reflexes on the other 2 days of testing (Vangsgaard et al. 2013). None of our participants had ever engaged in uninterrupted Ds treadmill walking as used in this study. Therefore, to acclimate participants to the Ds walking task, each subject walked Ds for 3 min at the end of the day-2 session. We have previously found that this duration of Ds walking does not cause exercise-induced muscle injury or delayed onset muscle soreness (Sabatier and Black 2012). Prior to walking a wireless tri-axial accelerometer was strapped securely to the right ankle (BN-ACCL3; Biopac Systems Inc., Goleta, CA). Stride frequency was computed online for each step using the time period separating consecutive accelerations resulting from foot contact with the treadmill.
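A minimal sketch of the stride-frequency computation from the ankle accelerometer is shown below; the peak-detection threshold and the minimum peak spacing are illustrative assumptions, not parameters reported in this study.

```python
import numpy as np
from scipy.signal import find_peaks

def stride_frequency(acc: np.ndarray, fs: float) -> np.ndarray:
    """Per-cycle stride frequency (Hz) from an ankle accelerometer trace.

    acc: resultant (vector-magnitude) acceleration; fs: sampling rate in Hz.
    Foot contacts appear as sharp spikes; the threshold and the minimum
    0.4-s spacing between peaks are illustrative choices.
    """
    peaks, _ = find_peaks(acc,
                          height=acc.mean() + 2.0 * acc.std(),
                          distance=int(0.4 * fs))
    contact_times = peaks / fs              # seconds
    return 1.0 / np.diff(contact_times)     # one frequency per cycle
```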
During the 5th, 10th, 15th, and 20th min of walking, heart rate was measured using a Polar Ft1 Heart Rate Monitor (Polar Electro Inc.; Lake Success, NY). Arterial pressure was measured at the brachial artery by manual sphygmomanometry using an appropriately sized cuff (Perloff et al. 1993). Participants were also asked to rate their perceived exertion using the Borg scale, with 6 being "very, very light" and 19 being "very, very hard" (Borg 1978). After walking, participants were immediately seated and prepared for post-walking H-reflex testing. Heart rate and blood pressure were measured once more 5 min after walking stopped. H-reflex collection began again at 10 min (n = 16), and then again at 45 min (n = 12).
Soleus H-reflexes
The H-reflex was measured at the soleus while the participant was seated in a semireclined position with the hip at 120° and the knee at 30°. The soleus was chosen because it is strongly modulated during slope walking (Lay et al. 2007). A band was secured around the legs at the distal thigh to prevent the legs from falling into external rotation or abduction. Electromyographic (EMG) activity was recorded from the soleus using bipolar electrodes (EL503; Biopac Systems Inc.) placed 2 cm apart along the posterior lateral aspect of the muscle, 2 cm inferior to the lower border of the lateral gastrocnemius (Basmajian and Blumenstein 1989). Identical day-to-day EMG and stimulation electrode placement was ensured by outlining electrode locations on the skin with a permanent marker. EMG signals were band-pass filtered (5-1000 Hz) and amplified by 2000 (BN-EMG2; Biopac Systems Inc.). Low impedance (<10 kΩ) was verified for all stimulating and recording electrodes using an electrode impedance meter (UFI MkIII Checktrode, Model 1089).
Reflexes were evoked by stimulating the tibial nerve in the popliteal fossa through a monopolar electrode (round, 2.5 cm) with the anode (square, 5 cm) placed above the patella. Both were self-adhering carbon rubber TENS/ NMES electrodes (Medical Products Online, Danbury, CT). Cathode placement was determined prior to testing using a pen electrode (Model G.MPPE, Digitimer, Hertfordshire, AL7 3BE, England) to find the location yielding an H-reflex without an M-response and plantar flexion without eversion or inversion. Single, 1-msec rectangular pulses were delivered at pseudo-random intervals (5-8 sec in duration) using a constant-current electrical stimulator (STMISOLA; BIOPAC Systems Inc.) controlled with custom-written scripts in AcqKnowledge software (Biopac Systems Inc.).
The amplitude of the H-reflex depends on motoneuron pool excitability (as measured via background EMG activity) when the H-reflex is elicited. Therefore, when evaluating H-reflexes it is important to standardize for the level of motoneuron pool excitability (Schieppati 1987;Burke et al. 1989;Zehr 2002). In this study, soleus background EMG activity was maintained at a constant low level, as measured from participants during quiet standing. The ankle was maintained at 10° of plantar flexion using a foot brace that the subject contracted against to adjust soleus EMG activity to the required level. Prior to H-reflex testing, participants stood quietly for 30 seconds while soleus and tibialis anterior EMG activity were measured. The average rectified soleus EMG activity was recorded. During collection of H-reflex recruitment curves, participants were provided with visual feedback to maintain average rectified soleus EMG activity at the standing level.
Spinal excitability
The H-reflex recruitment curve was acquired by progressively increasing the intensity of electrical stimulation in 0.5-1.0 mA increments to find the largest obtainable H-reflexes and M-responses, measured as the peak-to-peak amplitude of the raw EMG signal. The Hmax was taken as the average of the largest three H-reflexes, and the Mmax as the average of the largest three M-responses.
The M-response increases with increasing stimulus intensity, eventually reaching a plateau (referred to subsequently as Mmax). Mmax is an estimate of the response of the entire motoneuron pool. Longitudinal studies generally express Hmax as a percentage of the Mmax to account for potential changes in the ability to deliver current to the peripheral nerve across time and for differences in muscle geometry across subjects (Crone et al. 1999;Zehr 2002;Palmieri et al. 2004). This is also done to control for intersubject differences in the efficacy of nerve stimulation and the total number of MUs accessible by nerve stimulation (Palmieri et al. 2004). Therefore, one expression of spinal excitability used in this study is the Hmax/Mmax ratio.
A drawback to characterizing spinal excitability in the manner described above is that stimulating above motor threshold results in collision between orthodromic and antidromic impulse transmission. This may contribute to inter- and intrasubject variability in Hmax, and to the decreased sensitivity of the H-reflex to facilitation and inhibition at Hmax (Crone et al. 1990). The rising slope of the H recruitment curve (Hslp) has been suggested as a good alternative because it is free of this collision effect (Funase et al. 1994). Also, in previous studies, an increase in spinal excitability was detected using the Hslp/Mslp approach, but not with the Hmax/Mmax ratio (Kalmar and Cafarelli 1999;Walton et al. 2003). Therefore, in order to optimize this study's potential to detect an increase in spinal excitability, the slope of the H-reflex recruitment curve was also evaluated. Recruitment curves were fitted with 8th-order polynomial transformations starting at the first response of the ascending limb using custom-written scripts in Microsoft Excel. R-squared values for raw data versus computed curves were high (mean ± SE, 0.96 ± 0.01). The ratio of the Hslp to the M recruitment curve slope (Mslp) was computed to standardize for motoneuron excitability. This renders the Hslp/Mslp ratio a metric of spinal excitability. Each slope was derived from the linear regression line that included values from the transformed curve between 25% and 75% of the Hmax and Mmax, for the Hslp and Mslp, respectively.
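A minimal sketch of this curve-fitting procedure follows; the arrays stim and resp are hypothetical stimulus intensities and peak-to-peak amplitudes for one recruitment curve, and the implementation details are assumptions, not the exact scripts used in the study.

```python
import numpy as np

def limb_slope(stim: np.ndarray, resp: np.ndarray) -> float:
    """Slope of the ascending limb between 25% and 75% of the maximal response."""
    # 8th-order polynomial fit of the ascending limb (requires >= 9 points)
    smooth = np.polyval(np.polyfit(stim, resp, 8), stim)
    rmax = smooth.max()
    window = (smooth >= 0.25 * rmax) & (smooth <= 0.75 * rmax)
    slope, _intercept = np.polyfit(stim[window], smooth[window], 1)
    return slope

# Spinal excitability metric: ratio of the H-curve slope to the M-curve slope
# excitability = limb_slope(stim, h_amp) / limb_slope(stim, m_amp)
```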
Rate-dependent depression (RDD)
Rate-dependent depression (also known as postactivation depression) was measured as a segmental presynaptic mechanism of activity-dependent synaptic efficacy in the H-reflex pathway (Kohn et al. 1997;Hultborn and Nielsen 1998;Aymard et al. 2000). Ten H-reflexes were elicited at a low stimulus frequency of 0.1 Hz, at the stimulus intensity that elicited an H-reflex between 20% of Mmax and 50% of Hmax (Sosnoff and Motl 2010). The H-reflex was then elicited 11 times at the same stimulus intensity, but with a stimulus frequency of 1 Hz (i.e., high-frequency stimulation), and the last 10 H-reflexes of this series were averaged. The average amplitude of these H-reflexes is expressed as a percentage of the average H-reflex amplitude when evoked at a frequency of 0.1 Hz (Sosnoff and Motl 2010). Voluntary muscle contraction decreases RDD (Hultborn and Nielsen 1998). Therefore, participants were instructed to keep the leg at complete rest throughout RDD testing (Meunier et al. 2007;Lamy et al. 2009).
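A minimal sketch of this RDD calculation, assuming the amplitude arrays are ordered as described (the example values are illustrative):

```python
import numpy as np

def rate_dependent_depression(h_low: np.ndarray, h_high: np.ndarray) -> float:
    """h_low: 10 H-reflex amplitudes at 0.1 Hz; h_high: 11 amplitudes at 1 Hz."""
    mean_low = h_low.mean()                       # baseline, low-frequency average
    mean_high = h_high[1:].mean()                 # discard the first 1-Hz response
    return 100.0 * (1.0 - mean_high / mean_low)   # % depression vs. 0.1 Hz

# Example: a drop from 2.0 mV to 1.2 mV corresponds to 40% depression
print(rate_dependent_depression(np.full(10, 2.0), np.full(11, 1.2)))
```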
Muscle soreness
Downslope walking is associated with more eccentric muscle contractile activity than Lv or Us walking (Abelew et al. 2000;Gregor et al. 2001;Akima et al. 2005).
Because eccentric muscle contractions are associated with increased risk for delayed onset muscle soreness (DOMS), Ds walking might cause DOMS. In fact, previous studies have used a variety of Ds locomotion patterns as a way to induce muscle soreness (Whitehead et al. 2001;Farr et al. 2002;Nottle and Nosaka 2005). Although no previous study to our knowledge has used the Ds walking pattern described in this study, we anticipated that muscle soreness might develop in lower extremity muscles after Ds walking. Therefore, muscle pain intensity in the lower extremities (anterior and posterior leg, anterior thigh) was assessed using a visual analogue scale (VAS). The VAS consists of a 10-cm line ranging from 0 (no pain) to 10 (worst pain imaginable). Participants rated the pain intensity felt during daily life activities over the 4 days after the last session.
Statistical analysis
This study used a repeated measures design with outcomes collected on all 3 days. The repeated measures design was modeled using a general linear model with slope as a fixed effect and random effects for subject nested within time point, using R version 3.1.0 (The R Foundation for Statistical Computing) (Bates et al. 2014). If the interaction term for the model was significant, each post time point was compared to the pre time point using a model-adjusted t-test, where the standard errors and degrees of freedom were estimated from the model with restricted maximum likelihood methodology. Intraclass correlation coefficients were computed using the Statistica data analysis software system (version 10, StatSoft, Inc., 2013) to evaluate day-to-day reliability of dependent measures. Linear correlation analysis was carried out to determine the relationship between variables of interest.
The T-distribution was used to determine the statistical significance of correlations. The significance level was set at P ≤ 0.05 for all statistical tests.
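The original models were fitted in R with lme4; a rough Python analogue using statsmodels is sketched below on synthetic data, with column names chosen for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a small synthetic long-format dataset: 12 subjects x 3 slopes x 3 times
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(12), 9),
    "slope": np.tile(np.repeat(["Ds", "Lv", "Us"], 3), 12),
    "time": np.tile(["pre", "post1", "post2"], 36),
    "hmax_mmax": rng.normal(0.4, 0.08, size=108),
})

# Fixed effects for slope, time and their interaction; random intercept per subject
model = smf.mixedlm("hmax_mmax ~ C(slope) * C(time)", df, groups=df["subject"])
result = model.fit(reml=True)   # REML estimation, as in the original analysis
print(result.summary())
```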
Results
Study participants were healthy adults who did not participate in competitive sports. Nevertheless, we anticipated regular PA levels might vary significantly and contribute to variability in starting values of our dependent measures, or in the way these values responded to slope walking. Therefore, this study also measured PA via questionnaire to determine if there are relationships between regular PA and our dependent measures. There was a significant negative correlation between the change in Hmax/Mmax with Ds walking and PA (r = −0.52, P = 0.04, Fig. 1). Therefore, participants who reported more PA had a larger reduction in Hmax/Mmax after Ds walking. There was no such correlation for the Lv or Us walking conditions.
During treadmill walking there were several slope-related changes in measures associated with effort. Effort during slope walking was monitored by measuring blood pressure (BP), heart rate (HR) and ratings of perceived exertion (RPEs) (Table 1). Heart rate was highest during Us walking, and not significantly different between Lv and Ds walking. An exception was after treadmill walking: five minutes after cessation of both Ds and Us walking, HR remained elevated compared to after Lv walking. Heart rate increased during the Ds walking bout and was highest during the 3rd and 4th epochs. Like HR, systolic blood pressure (SBP) was also highest during Us walking. However, SBP was similar between Lv and Ds walking. Diastolic blood pressure (DBP) was higher during Ds walking than during both Lv and Us walking. However, DBP fell to a lower level at 5 min after cessation of Ds walking compared to either Lv or Us walking. Ratings of perceived exertion were higher for both Ds and Us walking compared to Lv walking. For Ds walking, RPE did not exceed 10 (9 was anchored as "Very light," and 11 was "Fairly light"). Thus, although there was increased perceived effort associated with walking on either slope, it remained light for Ds walking.
Step cycle timing
Stride frequency was measured to determine if basic step cycle timing for walking at different slopes in this study is consistent with previous reports on the biomechanics of slope walking. Stride frequency was significantly greater for Ds walking (1.04 ± 0.07 Hz) compared to Lv (0.85 ± 0.02 Hz) and Us walking (0.83 ± 0.07 Hz), P ≤ 0.05 for all comparisons. These results are consistent with previous investigations of the biomechanics of walking on different slopes (Hunter et al. 2010;Franz and Kram 2012, 2013). Thus, although comprehensive biomechanical measures of lower body movements were not made here, the results for step cycle timing, effort and DOMS support the idea that this study compared well to the studies referenced above.
Lower extremity muscle soreness
There were no reports of soreness or other discomfort after either Lv or Us walking. Although five participants reported no soreness, all other participants reported soreness ranging from 0.5 cm to 8.4 cm on the VAS scale after Ds walking. Soreness occurred only after Ds walking, and the majority of soreness was experienced in the tibialis anterior and the triceps surae (Table 2). This effect subsided during the 4 days after Ds walking, supporting the notion that the Ds walking pattern used in this study involved more eccentric muscle contraction than either Lv or Us walking. Incidentally, one subject also reported hip flexor soreness near the inguinal ligament. Another subject reported no soreness in the quadriceps, but rather a generalized feeling of fatigue in the legs.
H-reflexes and M-responses
Changes in H-reflex amplitude occurred as a result of Ds and Lv walking. H-reflex recordings from representative participants are shown in Fig. 2, and results for Hmax/Mmax ratios are illustrated in Fig. 3A. The model interaction term was statistically significant for Hmax/Mmax (P = 0.01). There was a reduction in Hmax/Mmax for Post-1 versus Pre for both Ds walking (P < 0.001) and Lv walking (P = 0.02). The change after Ds walking was larger than the change after Lv walking (29.3 ± 6.2% vs. 6.8 ± 5.2%, mean ± SEM, P = 0.02). EMG biofeedback was used to standardize background EMG activity during the collection of H-reflex recruitment curves. As a result, background EMG activity did not change across bouts of testing (Fig. 3C). Therefore, changes in background motor neuron recruitment can be ruled out as a cause of changes in H-reflex size. Results for Hslp/Mslp ratios are illustrated in Fig. 3B. The model interaction term was statistically significant for Hslp/Mslp (P = 0.05). There was an increase in Hslp/Mslp for Us walking only (Post-1 vs. Pre, P = 0.04). Therefore, when spinal excitability was characterized using the Hslp/Mslp method, Us walking had an effect that was not detected using the Hmax/Mmax approach.
Rate-dependent depression
The final objective of this study was to evaluate the effect of slope walking on RDD (Fig. 4). There was no detectable background EMG activity during the 100 ms preceding electrical stimulation (data not shown). This is consistent with the study protocol as subjects were asked to keep the leg at complete rest. RDD is expressed as a percentage (i.e., % depression when H-reflexes were elicited at 1 Hz vs. when they were elicited at 0.1 Hz) in Fig. 4B. The model interaction term was statistically significant for RDD (P = 0.01). There was a significant reduction in RDD after Us walking (Pre vs. Post-1, P = 0.04), but there were no changes after Ds or Lv walking (P ≥ 0.14). Therefore, Us walking results in a transient reduction in the ability to diminish afferent input from Ia afferents with repeated activation.
Discussion
This is the first study to evaluate the potential for slope walking to evoke spinal cord plasticity. Slope was used as a way to change the patterns of sensory, motor, and spinal inter-neuronal activity occurring during walking. The primary findings were that both Ds and Lv walking decreased spinal excitability, that this effect was significantly larger for Ds walking, and that this effect correlated well with physical activity level when elicited with Ds walking. Furthermore, Ds walking evoked a relatively minor cardiovascular response and perception of effort. This study also found that Us walking increased spinal excitability and caused a transient reduction in RDD. These observations expand our knowledge of the potential for spinal function to be modulated by a fundamental movement activity when the biomechanics have been altered to change the patterns of neural activity.
Effects of downslope walking
The Hmax/Mmax ratio in this study was depressed significantly more following Ds walking than following Lv walking. The Hmax/Mmax ratio has been used in previous studies to evaluate short-term spinal plasticity. For example, acute loaded and unloaded cycling exercise causes reduced Hmax/Mmax in healthy individuals (Motl and Dishman 2003;Motl et al. 2003a). It has also been reported that when cycle ergometry involves a motor skill component (e.g., variable resistance (Mazzocchio et al. 2006) or visuo-motor challenge (Perez et al. 2005)) there is significantly more reduction in Hmax/Mmax. Therefore, acute exercise can cause H-reflex depression, but the likelihood of such an effect is greater if the exercise involves more motor complexity. Results from this study show that a similar pattern occurs with walking. This might suggest that Ds walking involves more motor complexity, as it was found here to result in more H-reflex depression. Furthermore, this study is not the first to find an effect of walking on H-reflexes. A recent study found that 30 min of Lv treadmill walking caused H-reflex depression (Thompson et al. 2006). Our study adds that even as little as 20 min of Lv treadmill walking depresses H-reflexes. Two previous studies evaluated the effect of running on H-reflexes (Bulbulian and Bowles 1992;Racinais et al. 2008). Both found H-reflex depression that was more pronounced with a higher running intensity. Upslope walking (which was more intense than either Lv or Ds walking) did not cause a decrease in the Hmax/Mmax ratio in this study. This suggests that the propensity for higher-intensity locomotion to reduce Hmax/Mmax is limited to running. However, Bulbulian and colleagues (Bulbulian and Bowles 1992) also reported a larger reduction in Hmax/Mmax for 20 min of Ds (−10%) than for Lv running, both at 50% VO2max. The results of this study make it clear that the effect of using a negative slope for running also applies to walking, despite numerous differences in neural control and mechanics between these two forms of locomotion (Cappellini et al. 2006). This study also found that participants who reported more PA had a larger reduction in the Hmax/Mmax ratio after Ds walking. This supports the idea that there are chronic central nervous system adaptations related to increased PA levels (Adkins et al. 2006), and also that such adaptations predispose healthy adults to the Ds walking effect discovered in this study. Highly trained cohorts have been found to present with smaller or larger resting H-reflexes than untrained cohorts, depending on the nature of their training. For instance, competitive power athletes (Casabona et al. 1990;Maffiuletti et al. 2001) and ballet dancers (Nielsen et al. 1993) have smaller H-reflexes, and endurance athletes have larger H-reflexes (Casabona et al. 1990;Maffiuletti et al. 2001). These differences have been attributed to the level of motor skill involved in these athletes' competitive physical activities. However, such highly trained cohorts were not evaluated in this study. Indeed, in this study there were no correlations between PA and any baseline measures of the H-reflex pathway. As this study is the first to report a decrease in spinal excitability with Ds walking, it is not clear if these unique athletic populations would respond in unique ways to the Ds walking stimulus.
[Table 1 note: values are reported as mean (SD); *P < 0.05 vs. Pre.]
We would hypothesize that there may be subtle differences in the amounts of skilled activity our participants regularly undertake that may have been captured in self-reports of more PA. However, it is not possible to quantify such potential differences with the PA questionnaire used in this study. Future studies could evaluate the effect of slope walking in distinct athletic populations to determine if high volumes of PA involving more or less motor control impact the slope-walking response. Such investigations could provide insight into the physiological basis of the slope-walking effects.
[Figure 3. H-reflex results before and after 20 min of downslope (Ds), level (Lv), and upslope (Us) treadmill walking. (A) Hmax/Mmax ratio, mean + SE, *P < 0.05 versus pre; there was a significant reduction after Ds walking and after Lv walking. (B) Hslp/Mslp ratio, mean + SE, *P < 0.05 versus pre; there was a significant increase after Us walking. (C) Prestimulus background EMG activity (100 msec prior to electrical stimulation); there was no change across time.]
Effects of upslope walking
This study found that Us walking increased the Hslp/Mslp ratio and reduced RDD. Increased effort may have prompted these effects, as systolic blood pressure, heart rate, and RPE were elevated during Us walking. Increased MU recruitment and effort associated with Us walking may have resulted in increased serotonergic signaling from neurons of the raphe nuclei of the brain stem (Mazzardo-Martins et al. 2010), which have excitatory projections on alpha motoneurons (Barasi and Roberts 1974;White and Neuman 1980). Activity in the descending serotonergic system has been shown to increase in proportion to motor output (Jacobs et al. 2002). Furthermore, Cardona and Rudomin (1983) reported a decrease in RDD in response to activation of brainstem serotonergic pathways in the isolated frog preparation. Norepinephrine system activity may have a similar effect, as it also projects monosynaptically to motoneurons throughout the spinal cord (Holstege and Kuypers 1987) and increases with increased arousal (Aston-Jones et al. 2000). Both serotonin and norepinephrine enhance the effects of excitatory inputs to spinal motoneurons, producing long-lasting changes in motoneuron excitability (White and Neuman 1980). Therefore, in this study the motoneuron pool may have been less dependent on Ia synaptic transmission to reach firing threshold as a result of Us walking. This would facilitate a more sustained response to repetitive afferent inputs. Decreased RDD resulting from Us walking, as found in this study, suggests that the overall pattern of MU recruitment and incoming sensory information during Us walking results in a change in gating of afferent feedback that delegates more control of movement to peripheral reflex pathways. The increase in Hslp/Mslp found after Us walking would also contribute to this neural strategy. This is consistent with the idea that Us walking in healthy adults involves little skill development or motor learning. Rather, Us walking is unique in that it is associated with more MU recruitment (Gregor et al. 2006;Franz and Kram 2012) and force-related feedback compared to Lv or Ds walking (Gregor et al. 2006). As such, a decrease in RDD could support an overall strategy to optimize muscle stiffness during Us walking and improve force transmission across skeletal muscle.
In conclusion, this study provides the first evidence that, in healthy human participants walking for 20 min at a slow speed, treadmill slope determines the nature of the resulting changes in the H-reflex pathway. Although speculative at this time, it is possible that these effects constitute the initial stage of activity-dependent spinal plasticity. Future studies should determine if repeated exposures to either Ds or Us walking convert the outcomes reported here into more permanent adaptations, as happens with motor or exercise training.
"year": 2015,
"sha1": "96353efe20a6be988f1174b143add7052b58b4f9",
"oa_license": "CCBY",
"oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.14814/phy2.12308",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb29850d6b911782bb6ef781a0c2853d9dfb6573",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A quick and efficient hydroponic potato infection method for evaluating potato resistance and Ralstonia solanacearum virulence
Background Potato, the third most important crop worldwide, plays a critical role in human food security. Brown rot, one of the most destructive potato diseases caused by Ralstonia solanacearum, results in huge economic losses every year. A quick, stable, low-cost and high-throughput method is required to meet the demands of identifying germplasm resistance to bacterial wilt in potato breeding programs. Results Here we present a novel R. solanacearum hydroponic infection assay on potato plants grown in vitro. By scoring wilt symptom appearance and bacterial colonization in the aerial part of plants, we found that the optimum conditions for in vitro potato infection were an OD600 = 0.01 bacterial suspension in tap water, wounded potato roots and an open container. Infection with R. solanacearum strains of differing aggressivity demonstrated that this infection system is as efficient as soil-drench inoculation for assessing R. solanacearum virulence on potato. A small-scale assessment of 32 potato germplasms identified three varieties highly resistant to the pathogen, which indicates this infection system is a useful method for high-throughput screening of potato germplasm for resistance. Furthermore, we demonstrate the utility of a luminescent reporter strain for easily quantifying bacterial colonization and detecting latent infections in hydroponic conditions, which can be efficiently used in potato breeding programs. Conclusions We have established a quick and efficient in vitro potato infection system, which may facilitate breeding for new potato cultivars with high resistance to R. solanacearum.
Besides its worldwide geographic distribution, R. solanacearum possesses an extraordinarily broad host range, causing disease on more than 200 plant species from 50 different botanical families [3]. The pathogen not only infects solanaceous crops such as tomato, eggplant, peanut, pepper and potato, but also other plants from both the dicot and monocot families, and new hosts are being discovered continuously [7]. Due to its wide geographical distribution, broad host range, long persistence in soil and high aggressivity on plants, the bacterium has been ranked as the second most important bacterial plant pathogen [1].
Potato is currently the third most important staple food crop for direct human consumption, just after rice and wheat, and it ranks first in energy and protein production per unit of water [8,9]. In protein produced per acre of land, potato ranks second only to soybean. Importantly, potato is rich in microelements and vitamins essential for the human diet, such as vitamin C and potassium, as well as in fiber [8]. Brown rot caused by R. solanacearum is one of the most notorious potato diseases, estimated to cause US$1 billion in economic losses worldwide each year [1]. Breeding new potato cultivars with resistance to brown rot is essential for integrated management of this disease. To this end, the development of procedures to facilitate screening for resistance in germplasm from wild potato collections, progenies from crosses between potato species and potato transgenic lines will help potato breeding programs.
Soil-drench and stem-puncture inoculation are the two methods most widely used to study plant resistance to R. solanacearum [10][11][12][13]. However, adult full-size plants are needed in these procedures, with the ensuing cost in space, energy and time. Furthermore, since R. solanacearum is a soil-borne root pathogen, stem penetration bypasses potential root resistance mechanisms. The disadvantage of soil-drench inoculation is that the opacity of soil hinders the direct investigation of root responses to the pathogen. Vass et al. inoculated tomato plants grown in hydroponic conditions to study R. solanacearum root colonization [14]. More recently, in vitro pathogenicity assays have been successfully established on tomato [15], Arabidopsis [16,17], petunia [18] and M. truncatula [19]. However, miniaturized in vitro infection assays have not been set up to screen potato for resistance to the pathogen. We previously generated constitutively luminescent R. solanacearum reporter strains, a tool that we have used to characterize the colonization and the defense responses of potato breeding lines [20,21]. Here we have established a faster, highly efficient, low-cost potato hydroponic infection method to study R. solanacearum-potato interactions. Using this new method, we successfully characterized the virulence of several R. solanacearum strains on potato and screened for potato varieties showing resistance to R. solanacearum. We also simplified the screening process using a luminescent pathogen that can be tracked in vivo in infected plants. This new method will promote the study of potato resistance to R. solanacearum and provide insights for investigating other root pathogens of potato under gnotobiotic conditions.
Development of an in vitro potato infection system for R. solanacearum
Aiming at the quick identification of potato resistance to R. solanacearum, we designed a method for infection in vitro (Fig. 1a). A 10^7-10^8 colony-forming units (cfu)/ml R. solanacearum suspension is typically used to infect plant hosts such as Arabidopsis, tomato and potato [13,20,22]. Thus, we grew potato plants hydroponically in MS liquid medium for two weeks, injured the roots and transferred them to the same medium containing 1 × 10^8 cfu/ml of R. solanacearum strain GMI1000, belonging to phylotype I. The leaves of the infected plants started wilting at 4 days post inoculation (dpi). At 5 dpi, almost all plants exhibited severe wilting symptoms, while the leaves of plants mock-treated with water remained green and healthy (Fig. 1b). This suggested that in vitro infection could be employed to test the virulence of R. solanacearum on potato.
Wilt symptoms appear when water transport in the xylem is blocked by exopolysaccharides produced by R. solanacearum [2]. We thus hypothesized that increasing evapotranspiration by opening the growth containers' lids could accelerate plant wilting. To test this possibility, plant inoculations were carried out in parallel under open- and closed-lid conditions. Although the same number of wilted plants was observed in both conditions, plants in the open jars exhibited more severe wilting symptoms than those in the closed jars (Fig. 2a). We further measured R. solanacearum growth in the aerial part of potato plants. The amount of bacteria in plants in the open and closed jars was comparable (Fig. 2b), suggesting that air exchange does not affect bacterial growth, but enhances wilting. In nature, R. solanacearum enters roots at the emergence sites of lateral roots or at root tips [2]. To test whether this natural infection also worked in our in vitro infection system, 2-week-old plants without injury were directly infected with the 1 × 10^8 cfu/ml R. solanacearum solution. Even at 9 dpi, no potato plant exhibited wilting symptoms in these conditions (Fig. 3a). To confirm successful colonization in plants, we measured bacterial loads in the aerial plant tissues.
The amount of the pathogen in both conditions reached 10^8 cfu/g at 9 dpi (Fig. 3b), which is 10-fold lower than the 10^9 cfu/g attained in the root-cut plants at 5 dpi (Fig. 2b). Therefore, plant inoculation without root injury resulted in symptomless infections due to lower bacterial numbers. Thus, the root-cutting treatment was selected for the following experiments based on its faster, reproducible infection, which correlated with the development of wilting symptoms.
The bacterial concentration (1 × 10^8 cfu/ml) used in laboratory inoculations is usually far higher than that occurring in nature [13,20,22]. To check whether lower inocula could be used for in vitro potato infection, disease development was investigated using a series of bacterial suspensions. Eighty percent of plants infected with the higher bacterial concentrations (1 × 10^8 cfu/ml and 1 × 10^7 cfu/ml) showed wilting symptoms at 8 dpi. At this time, plants infected with the lowest bacterial concentration (1 × 10^6 cfu/ml) had just started wilting (Fig. 4a). Moreover, potato plants inoculated with 1 × 10^8 or 1 × 10^7 cfu/ml showed similar amounts of bacteria in the aerial part of the plant, while those inoculated with the 1 × 10^6 cfu/ml suspension contained fivefold fewer bacteria (Fig. 4b). This suggested that 1 × 10^7 cfu/ml is the optimum R. solanacearum concentration for inoculating potato plants in hydroponic conditions.
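As a practical aside, inoculum preparation by dilution can be sketched as follows; the conversion factor of OD600 1.0 ≈ 10^9 cfu/ml is a common rule of thumb for bacterial suspensions, not a value reported here, and should be calibrated by plating.

```python
def stock_volume_ml(od600: float, target_cfu_ml: float,
                    final_volume_ml: float, cfu_per_od: float = 1e9) -> float:
    """Volume of culture to dilute so the final suspension hits the target titer."""
    stock_cfu_ml = od600 * cfu_per_od        # assumed linear OD-to-cfu conversion
    return target_cfu_ml * final_volume_ml / stock_cfu_ml

# Example: culture at OD600 = 0.5, 50 ml of inoculum at 1e7 cfu/ml -> 1.0 ml
print(stock_volume_ml(0.5, 1e7, 50.0))
```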
In the previous experiments, liquid MS medium containing many nutrients and vitamins (MS−) was used to resuspend R. solanacearum for infection. To rule out a potential effect of the MS medium on R. solanacearum growth or virulence, we substituted tap water for the MS− medium. Wilting symptoms developed faster and were stronger on potato plants infected with R. solanacearum suspended in tap water than in MS− medium at the same bacterial concentration (Fig. 5a). In line with this observation, the amount of bacteria in potato plants treated with the water-resuspended pathogen was fivefold higher than in the potato plants treated with bacteria resuspended in MS− (Fig. 5b).
Hydroponic potato infection can be used to study R. solanacearum virulence in vitro
HrpG and HrpB, two key regulators of the bacterial type three secretion system, play critical regulatory roles in R. solanacearum virulence. The hrpG and hrpB mutant strains lose the ability to invade tomato [14,23,24]. To determine whether our in vitro infection system was suitable to evaluate R. solanacearum pathogenesis, we first infected potato plants with R. solanacearum GMI1000 wild type (wt) and with the same strain carrying a precise deletion of the hrpG (ΔhrpG) or the hrpB coding sequence (ΔhrpB). The hrpB and hrpG mutants did not cause any bacterial wilt disease, in contrast to the strong wilting symptoms caused on potato plants by the wild-type strain (Fig. 6a). Bacterial growth analysis showed that potato colonization was similar for the hrpG and hrpB mutants, but their bacterial content in stems was 100-fold lower than that of the wild-type GMI1000 (Fig. 6b). These data indicate that mutations in HrpB and HrpG abolish R. solanacearum virulence on potato, which is consistent with the fact that both mutants are non-pathogenic on tomato and Arabidopsis [23,25]. Next, we selected wild-type R. solanacearum strains different from GMI1000 to test their aggressivity in our hydroponic potato infection system. Brown rot of potato is most commonly caused in the field by a subgroup of R. solanacearum strains belonging to phylotype IIB [26]. UW551 and IPO1609 from this group, and the related CFBP2957 and CIP301 strains from phylotype IIA, were selected to investigate their virulence [6]. Infection with UW551 and CFBP2957 resulted in strong leaf wilting, while IPO1609 and CIP301 did not cause any visible symptom (Fig. 7a). In addition, the multiplication of UW551 and CFBP2957 in potato was 100-fold higher than that of IPO1609 and CIP301 (Fig. 7b). Our results indicate that UW551 and CFBP2957 are much more aggressive on potato than IPO1609 and CIP301. Hence the in vitro infection system we have established here can be used to measure differences in the aggressivity of R. solanacearum strains on potato.
[Fig. 3 legend: Higher evapotranspiration does not accelerate the appearance of wilting symptoms on plants with intact roots. (a) Representative pictures taken at 9 dpi; "W/T" represents the number of wilted plants with respect to the total of infected plants; "open" indicates air-exchange conditions and "close" indicates no air exchange. (b) Bacterial colonization in potato stems detected at 9 dpi. Two-week-old potato plants were inoculated with 1 × 10^8 cfu/ml of R. solanacearum strain GMI1000 without wounding the roots. The experiment was repeated twice using 12 plants for each condition.]
Evaluation of the resistance of potato varieties to R. solanacearum
Easy identification of potato varieties with high resistance to R. solanacearum is a prerequisite for potato breeding programs. Hence, we performed a small-scale experiment to test whether our pathosystem could be used to efficiently screen for potato resistance to R. solanacearum. Thirty-two varieties of Solanum tuberosum L., S. tuberosum subsp. andigenum, S. raphanifolium and S. pinnatisectum were grown and inoculated using our hydroponic conditions. The results obtained from six of these varieties and the control susceptible variety Désirée are shown in Fig. 8. Wilting symptoms were clearly abolished on varieties O and P, and delayed on varieties B, M and N, compared to variety L and the control susceptible variety Désirée (Fig. 8a). Consistent with this, the population of the pathogen in O and P was 1,000-fold lower than that in Désirée, suggesting O and P are highly resistant to R. solanacearum (Fig. 8). Interestingly, the pathogen population in M was only twofold lower than that in Désirée (Fig. 8b), yet wilting symptom development was significantly delayed, indicating that M is a variety tolerant to strain GMI1000. Reactive oxygen species (ROS) play a key role in plant defense [27]. To ascertain whether ROS production was triggered in the resistant varieties O and P in response to R. solanacearum, we measured it in leaves of O, P and Désirée infiltrated with the pathogen. Compared with Désirée and O, a higher ROS level was detected in P plants at 3 dpi (Fig. 8c). This suggests that ROS signaling may be part of the defense responses leading to resistance to R. solanacearum in P. Moreover, while the leaf-infiltrated Désirée plants exhibited wilting symptoms, O and P plants did not (Fig. 8c). This indicates that the resistance of O against R. solanacearum seems to be controlled by alternative, ROS-independent mechanisms.
The observation of symptomless plants does not always correlate with potato resistance to R. solanacearum, as symptomless latent infections often occur [20]. Thus, bacterial counts must also be assayed. However, measuring bacteria in plants is a laborious task that cannot be applied at high scale to screen germplasm for resistance to R. solanacearum. Stable insertion of a luxCDABE luminescence reporter operon in the R. solanacearum genome has facilitated real-time monitoring of bacterial growth in plant hosts and has been used to evaluate potato resistance in plants grown in pots [20,28]. To improve this screening method, we applied the luxCDABE luminescence reporter in our in vitro infection system. To this end, we used the strain GMI1000 (PpsbA-lux) carrying the entire lux operon under the control of the PpsbA chloroplast promoter, which exhibits strong, constitutive expression when introduced into R. solanacearum [20,29]. As expected, potato plants infected with either GMI1000 or GMI1000 (PpsbA-lux) showed comparable wilting symptoms (Fig. 9a) and bacterial counts (Fig. 9b) in potato at 5 dpi. In addition, light detection with a luminometer showed a strong luminescence signal only in plants infected with GMI1000 (PpsbA-lux) (Fig. 9c). These data corroborate that insertion of the PpsbA::luxCDABE construct in the R. solanacearum genome affects neither its colonization ability nor its capacity to cause disease symptoms. The application of a luminescence reporter increases the efficiency of our hydroponic potato infection system for the evaluation of potato germplasm resistance to bacterial wilt.
Discussion
The interaction between R. solanacearum and its plant hosts has been established as a model system to study plant resistance to soil-borne bacterial phytopathogens for more than two decades [10,24]. Soil-drench and/or stem penetration inoculations are mostly used to investigate bacterial wilt disease progress on tomato, eggplant, potato, and the model plants Medicago truncatula and Arabidopsis [10-13, 20, 30]. Either of these two infection methods requires a large amount of time and space, with the ensuing high costs. Moreover, growth in soil prevents the investigation of early root responses to the pathogen [17,20]. To overcome these problems, we set up here an in vitro inoculation assay on potato. In vitro potato inoculations have previously been used to quantify blackleg disease on shoots, showing results comparable to greenhouse assays [31]. Chen and colleagues successfully identified three SSR alleles related to bacterial wilt resistance from Solanum tuberosum + S. chacoense somatic hybrids through in vitro inoculation of potato plants grown in solid medium [32]. However, their infection protocol was not described explicitly. Here, we thoroughly described a quick, accurate and space-saving potato infection system to monitor R. solanacearum using plants grown under hydroponic in vitro conditions. In our system, four potato plants were directly propagated in a container and infected two weeks later with a R. solanacearum suspension. Compared with soil-drench inoculation [20], our method saves two weeks. In our assay, 75% of plants were completely wilted at 2-3 days after the first wilting symptoms were recorded (Figs. 2a, 4a, 5a and 8a), showing that this assay is very stable and reproducible. Our recently established in vitro infection system for Arabidopsis grown on agar plates has shown that R. solanacearum infection changes the root architecture [16,17,33]. This phenomenon could not be observed and investigated by means of traditional soil-drench or stem penetration inoculations. Thus, our assay provides the possibility to investigate early potato responses to R. solanacearum.
The HrpG and HrpB transcriptional regulators control the virulence of R. solanacearum by modulating the expression of the genes encoding the type three secretion system and its related effectors [25,34]. The deletion of hrpG and hrpB abolished the occurrence of wilt symptoms and restrained pathogen proliferation in potato plants (Fig. 6), consistent with the loss of the mutant strains' ability to infect tomato and Arabidopsis [14,17]. However, while the ΔhrpG deletion mutant grew more than ΔhrpB in tomato stems [14], these two strains grew to similar levels in potato. This could be a host species-dependent phenomenon. In line with this hypothesis, the capacity of the ΔhrpB strain to colonize Arabidopsis seems to be stronger than that of the ΔhrpG strain [17].
R. solanacearum strains UW551 and IPO1609 belong to race 3 biovar 2, which causes potato brown rot at cool temperatures [26]. In our potato infection assay UW551 was much more aggressive than IPO1609, causing stronger wilting symptoms and increased bacterial growth. In accordance with this, it has been reported that the pathogenicity of IPO1609 was strongly attenuated on tomato and potato relative to UW551 when a soil-drench inoculation method was used, due to a major deletion present in its genome [26]. We also found that strain CIP301, isolated from potato, did not display strong virulence on potato. Therefore, we speculate that it may be a hypoaggressive strain similar to IPO1609. CFBP2957, from tomato, exhibited hypervirulence on potato in our assay. This is not surprising, as it has long been known that host range in nature does not always correlate with aggressivity on different hosts under laboratory conditions. For instance, UW551, a potato strain, has been reported to cause strong bacterial wilt on tomato [26]. All these data indicate that this in vitro infection assay is suitable for evaluating the pathogenicity of R. solanacearum strains on potato as accurately as when soil-drench inoculation is used. In addition, our hydroponic infection also provides the possibility to directly investigate the interaction between potato roots and other soil-borne pathogens.
Three wild type potato lines were identified with higher bacterial wilt resistance among the 32 candidate lines tested. This indicates that the in vitro infection system established here can be effectively applied to high-throughput screening for bacterial wilt resistance in potato germplasm. Wilting symptoms are the simplest way to evaluate plant resistance to bacterial wilt. However, symptom recording is time consuming, and latent symptomless infections, which can cause havoc when environmental conditions change [20,35,36], escape detection. Thus, latent infection limits the application of leaf wilting to evaluate potato resistance to the pathogen. To overcome this problem, we employed a luminescent reporter strain [20,29] in our infection system to be able to quantify bacteria inside the plant even when they have not caused symptoms. Luminescence intensity was positively correlated with bacterial colonization in the infected plant stem (Fig. 9). However, unlike in our previous studies [20], bacterial colonization in the infected plant could not be visualized in this work using a light imaging system (ChemiDoc™ XRST). One reason for this could be that the luminescent GMI1000 strain, originally isolated from tomato, is less aggressive on potato than the luminescent UY031 strain that we used in previous reports [37,38]. In addition, it is possible that the bacterial concentrations carried by the younger plants used here are below the detection limits of the light imaging system. In any case, we could effectively quantify the luminescent bacteria with a luminometer. Compared with colony counting after dilution plating, detection of bacterial luminescence from crushed stems using a 96-well plate luminometer is a faster, more reliable procedure.
Conclusion
In this study, a hydroponic in vitro potato infection assay has been successfully established for R. solanacearum. This assay is less time-consuming, lower-cost, accurate and easier to handle compared with the previously described and widely used infection assays. We demonstrated that it is also applicable to large-scale screening of potato germplasm for resistance to brown rot disease, which will speed up and increase the efficiency of breeding resistance into potato cultivars.
Plants and strains
Two-centimeter shoot explants from potato (Solanum tuberosum L. Désirée; B and N from S. tuberosum subsp. andigenum; M, O and P from S. raphanifolium; L from S. pinnatisectum) were cut and inserted into paper holders, which were immersed in 35 ml of MS liquid medium (4.405 g/l). To prepare bacterial inocula, 2-3 single R. solanacearum colonies (strains GMI1000, UW551, IPO1609, CFBP2957 or CIP301) were transferred into 10 ml of liquid B medium (10 g/l peptone, 1 g/l yeast extract and 1 g/l casamino acids) and incubated overnight at 28 °C in a shaker.
In vitro potato infection assay
Overnight R. solanacearum cultures were collected by centrifugation (4,000 rpm, 5 min), washed once with MS−/tap water, diluted with MS−/tap water and adjusted to OD600 = 0.01. The bacterial suspensions were then distributed into jars, using 35 ml per jar for infection.
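For readers implementing the protocol, the density adjustment is a standard C1·V1 = C2·V2 computation. Below is a minimal sketch in Python; the helper name and the measured culture density of 1.2 are invented for the example, and only the 35 ml jar volume and OD600 = 0.01 target come from the protocol above.

```python
# C1*V1 = C2*V2 dilution helper (hypothetical; only the 35 ml jar volume and
# the OD600 = 0.01 target come from the protocol above).

def volume_of_culture(od_culture: float, od_target: float, v_target_ml: float) -> float:
    """Volume of overnight culture needed, treating OD600 as proportional to density."""
    return od_target * v_target_ml / od_culture

v1 = volume_of_culture(od_culture=1.2, od_target=0.01, v_target_ml=35.0)  # 1.2 is invented
print(f"add {v1:.2f} ml of culture to {35.0 - v1:.2f} ml of MS-/tap water per jar")
```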
Roots of 2-week-old potato plants were cut with scissors 2 cm below the stem and placed into the bacterial suspension for inoculation. Inoculated potato plants were kept in the growth chamber under long-day conditions (16 h light, 8 h dark), at 25 °C, 10,000 lx light and 70% humidity. At 2 dpi, the lid of the jar containing the infected plants was loosened to allow air exchange. Wilting symptoms on the infected plants were recorded by taking digital images at the indicated times.
[Figure caption: b Bacterial content in potato stems measured at 4 dpi. *P < 0.01 (Student's t test) with respect to Désirée. c ROS production in the infiltrated leaves of potatoes measured at 3 dpi. Left: DAB staining; right: representative image of the plants for the DAB staining assay taken at 6 dpi. These experiments were repeated at least twice with similar results.]
DAB staining assay
Plant leaves were directly infiltrated with R. solanacearum suspension at OD600 = 0.001. Infiltrated leaves were detached at the indicated times and immediately immersed in 1 mg/ml DAB solution overnight in the dark. The leaves were then de-stained with absolute ethanol, boiled for 10 min and photographed.
Bacteria counting and bacteria luminescence quantification
The aerial part of the infected plants was harvested 1 cm above the level of the liquid in the jars, weighed, and homogenized with pestle and mortar. Two ml of double-distilled water (ddH2O) were added to the plant material, and the homogenates were serially diluted in water and plated on solid B medium. Plates were kept in a 28 °C incubator for 48 h and bacterial colonies were counted. The bacterial content in the stem (cfu per fresh weight of the aerial part of the infected plants) was used to evaluate bacterial virulence or plant resistance.
For luminescence measurement assays, the homogenates from the aerial tissues of infected plants were transferred to a 96-well plate (Nunclone) and the luminescence emitted by the pathogen was measured and quantified with an Infinite 200 Pro plate reader (Tecan). Luminescence readings were normalized to the fresh weight of each sample and presented as RLU (relative luminescence units) per gram of fresh tissue.
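The normalizations described above are simple arithmetic; the following Python sketch shows one way to compute cfu per gram from dilution plating and RLU per gram from luminometer readings. All numerical values and helper names are invented examples; only the 2 ml homogenate volume and the per-gram normalization come from the protocol above.

```python
# Post-processing sketch for dilution plating and luminometer readings.
# Numbers are invented examples; only the 2 ml homogenate volume and the
# per-gram normalization come from the protocol above.

def cfu_per_gram(colonies: int, dilution: float, plated_ml: float,
                 homogenate_ml: float, fresh_weight_g: float) -> float:
    """cfu per gram of fresh aerial tissue from a single countable plate."""
    cfu_per_ml = colonies / (dilution * plated_ml)     # cfu per ml of homogenate
    return cfu_per_ml * homogenate_ml / fresh_weight_g

def rlu_per_gram(rlu: float, fresh_weight_g: float) -> float:
    """Relative luminescence units normalized to fresh weight."""
    return rlu / fresh_weight_g

print(f"{cfu_per_gram(87, 1e-5, 0.05, 2.0, 0.4):.2e} cfu/g")
print(f"{rlu_per_gram(5.6e4, 0.4):.2e} RLU/g")
```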
Availability of data and material
All data generated or analysed during this study are included in this published article.
Ethics approval and consent to participate
Not applicable.
Fig. 9 The growth of luminescent R. solanacearum was easily detected in plants. a Representative picture of infected potato plants taken at 5 dpi. b Bacterial content in the stem counted at 5 dpi. c Luminescence of R. solanacearum (PpsbA-lux) detected with a 96-well plate reader using the luminometer mode. This experiment was performed at least twice with similar results. **P < 0.001 (Student's t test) with respect to GMI1000.
"year": 2019,
"sha1": "4c7cce6424db554ae9ee3315ddad79bd6274cefb",
"oa_license": "CCBY",
"oa_url": "https://plantmethods.biomedcentral.com/track/pdf/10.1186/s13007-019-0530-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c7cce6424db554ae9ee3315ddad79bd6274cefb",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Asymptotics with a positive cosmological constant: II. Linear fields on de Sitter space-time
Linearized gravitational waves in de Sitter space-time are analyzed in detail to obtain guidance for constructing the theory of gravitational radiation in the presence of a positive cosmological constant in full, nonlinear general relativity. Specifically: i) In the exact theory, the intrinsic geometry of I is often assumed to be conformally flat in order to reduce the asymptotic symmetry group from Diff(I) to the de Sitter group. Our results show explicitly that this condition is physically unreasonable; ii) We obtain expressions of energy-momentum and angular momentum fluxes carried by gravitational waves in terms of fields defined at I +; iii) We argue that, although the energy of linearized gravitational waves can be arbitrarily negative in general, gravitational waves emitted by physically reasonable sources carry positive energy; and, finally, iv) We demonstrate that the flux formulas reduce to the familiar ones in Minkowski space-time in spite of the fact that the limit Λ → 0 is discontinuous (since, in particular, I changes its space-like character to null in the limit).
I. INTRODUCTION
A rich theory of isolated gravitating systems, developed systematically since the 1960s [1][2][3][4], lies at the foundation of a large fraction of research in general relativity with zero cosmological constant. Examples include the gravitational radiation theory, classical and quantum aspects of black holes, and several major initiatives in geometrical analysis (see, e.g., [5] for a summary). But observations strongly indicate that the cosmological constant is positive in the universe we inhabit [6]. Therefore it is important to extend the conceptual framework from the Λ = 0 case to the Λ > 0 regime.
In the first paper in this series [5] we began an exploration of this problem. Our findings related to gravitational waves can be summarized as follows. If one considers space-times which are asymptotically de Sitter in the sense introduced by Penrose [4] (more precisely, which satisfy Definition 2 in [5]), then the asymptotic symmetry group is simply Diff(I). Thus, with these boundary conditions, one cannot single out translations or rotations even asymptotically.1 Consequently, one cannot introduce 2-sphere charges analogous to the Bondi 4-momentum at I [1,3], or calculate fluxes of energy, momentum and angular momentum carried away by gravitational waves [7]. We then examined a common strategy to reduce Diff(I) to the de Sitter group by strengthening the boundary conditions. The idea is to restrict oneself to those space-times for which the intrinsic 3-metric q_ab on I is conformally flat. This additional restriction seems natural, because the condition is satisfied in the familiar examples, including the Kerr-de Sitter and Friedmann-Lemaître space-times. Furthermore, the 2-sphere charges at I associated with the Kerr-de Sitter time-translation and rotation yield the expected mass and angular momentum [5].
1 In particular, one cannot select the asymptotic quantum states which are necessary, e.g., to systematically discuss whether the quantum evaporation of black holes is unitary.
However, we showed that the additional boundary condition is equivalent to demanding that the magnetic part B ab of the leading order asymptotic Weyl curvature must vanish at I. Now, in the case of Maxwell fields on asymptotically de Sitter space-times, the analogous requirement would be that the magnetic field B a should vanish at I. This requirement would remove half the space of solutions by fiat! By analogy, in the gravitational case, the strengthening of the boundary conditions appears to be physically unjustifiable. Furthermore, irrespective of whether one strengthens the boundary conditions in this manner or not, one does not have expressions of fluxes of energy-momentum and angular momentum carried away by gravitational waves. Indeed, for Λ > 0, no gauge invariant characterization of gravitational waves is available in full general relativity! Thus, we have an apparent impasse: On the one hand the B ab = 0 condition is too strong but, on the other hand, the boundary conditions are too weak without it (both for the gravitational radiation theory and quantum considerations).
A new framework is being constructed to overcome this difficulty and address related issues discussed in [5]. In this paper we will complete the first step of that program by analyzing source-free, linearized gravitational waves in de Sitter space-time. In the Λ = 0 case, the analogous analysis of linearized fields was necessary for the derivation of energy loss due to a time changing quadrupole moment in the weak field approximation (see, e.g., [8]). More generally, it provided considerable intuition and important checks in the final construction of the theory of gravitational waves in exact general relativity [1-4,9]. In subsequent papers we will see that the same is true in asymptotically de Sitter space-times.
The main ideas of this paper can be summarized as follows. We will restrict ourselves to the (future) Poincaré patch of de Sitter space-time because, as we will see in [10], this provides the setting that is appropriate for describing isolated systems in full general relativity. The subgroup of the 10-dimensional de Sitter group that leaves this patch invariant is 7-dimensional, consisting of 4 (de Sitter) translations and 3 rotations; symmetries that enable one to define the total 4-momentum and angular momentum carried by test fields, including linearized gravitational waves. Because of the high degree of symmetry of the Poincaré patch, as is well-known, one can solve linearized Einstein's equation explicitly. By examining the behavior of solutions at I + we will explicitly show that the (linearized analog of the) condition B ab = 0 at I + removes, by hand, half the number of degrees of freedom associated with gravitational waves. This will confirm the expectation from Maxwell's theory.
For test matter, such as scalar, Maxwell or Yang-Mills fields, conserved quantities can be readily constructed using the stress-energy tensor. For linearized gravitational fields, on the other hand, we do not have a gauge invariant, local stress-energy tensor because in general relativity gravity is absorbed into space-time geometry. Therefore a new strategy is needed. A convenient route is provided by the covariant Hamiltonian framework where the phase space Γ_Cov consists of solutions to the linearized Einstein's equation. Diffeomorphisms generated by isometries have a well-defined action on Γ_Cov which preserves the natural symplectic structure ω on Γ_Cov. The Hamiltonians generating these canonical transformations provide us with formulas for the energy-momentum and angular momentum carried by gravitational waves. We will express these quantities in terms of fields that are well-defined on I +. These expressions will be needed in the derivation of the energy loss due to a time-changing quadrupole moment in the Λ > 0 case, derived in [11].
Finally, we discuss the Λ → 0 limit. Physically one expects that in this limit the energy-momentum and angular momentum expressions should reduce to the well-known ones for linear gravitational waves in Minkowski space. However, the limit is delicate because of conceptually important discontinuities. In particular, while I + is space-like for every Λ > 0, it is null for Λ = 0. Similarly, while the generator of every de Sitter 'time translation' (used to define de Sitter energy) is space-like in a neighborhood of I + for any Λ > 0, it is time-like in a neighborhood of I + for Λ = 0. Consequently, while the flux of energy at the de Sitter I + can be arbitrarily negative no matter how small Λ is, it is strictly positive in the Λ = 0 case. We provide a detailed, systematic procedure to take the limit and show that the de Sitter fluxes do go over to the Minkowski fluxes in the limit. This procedure will be useful in reliably estimating the errors one makes by working in the asymptotically flat context rather than the asymptotically de Sitter one.
The paper is organized as follows. In section II we collect results on the geometry of the Poincaré patch that will be used throughout our discussion and show that, for test Maxwell fields, the familiar fluxes of (the de Sitter) energy-momentum and angular momentum obtained using the stress-energy tensor can be derived using Hamiltonian methods that do not refer to the stress-energy tensor. In section III we study the asymptotic behavior of the explicit solutions to the linearized Einstein's equation in the Poincaré patch and analyze the consequences of the B ab = 0 condition. In section IV we introduce the covariant phase space Γ Cov of linearized gravitational fields in the Poincaré patch, derive expressions of Hamiltonians associated with the seven isometries, and express their limits to I + using fields that have well-defined limits there. In section V, we derive two properties of these fluxes. First, we show that when the subtleties associated with the Λ → 0 limit are taken into account, our flux expressions of section IV do reduce to the standard flux formulas associated with linear gravitational waves in Minkowski space-time. Second, we show that although gravitational waves in de Sitter space-time can carry arbitrarily large negative energy, for the class of solutions that are of direct physical interest in the investigation of isolated systems, they carry positive energy.
Our conventions are as follows. Throughout we assume that the underlying space-time is 4-dimensional and the space-time metric has signature -,+,+,+. The curvature tensors are defined via: 2∇_[a ∇_b] k_c = R_abc^d k_d, R_ac = R_abc^b, and R = g^ab R_ab.
II. PRELIMINARIES
This section is divided into two parts: i) symmetries of the Poincaré patch; and, ii) the covariant phase space and conserved quantities associated with these symmetries.
A. The Poincaré patch
In the Λ = 0 case, to study isolated systems in the weak field limit, one investigates linearized gravitational fields in Minkowski space-time. For the Λ > 0 case, it may seem natural to replace Minkowski space with de Sitter space-time. However, because of the differences in the causal structures of these two space-times, an important difference arises. Consider an isolated system -such as a single star or a binary- that is confined to a spatially bounded world-tube for all times (see the left panel in Fig. 1). In this case the matter world-tube has future and past end-points in both the Λ = 0 and Λ > 0 cases, denoted by i±. However, whereas in the Λ = 0 case the future of i− is the entire Minkowski space-time, if Λ > 0, it is only the future Poincaré patch of de Sitter. No observer whose world-line is confined to the past Poincaré patch can see the isolated system or detect the radiation it emits. Therefore, to study this system, it suffices to restrict oneself just to the future Poincaré patch rather than the full de Sitter space-time. Indeed, while it is difficult to impose the physically appropriate 'no incoming radiation' boundary condition at I− [12], as we will see in [10], this condition can be naturally imposed at the cosmological horizon E+(i−) that constitutes the past boundary of this Poincaré patch. Because our primary purpose is to develop intuition for the full, nonlinear theory, we will restrict ourselves to this future Poincaré patch, although all our results can be readily extended to the full de Sitter space-time. Next, for easy comparison with the rich literature on gravitational waves in cosmology, we will use coordinates η, x, y, z, with x, y, z assuming their full range on R³ and the conformal time η ∈ (−∞, 0) (see the right panel in Fig. 1). Then the de Sitter metric can be expressed as:

ḡ_ab dx^a dx^b = (1/(H²η²)) (−dη² + dx² + dy² + dz²),  (2.1)

where H := √(Λ/3) =: 1/ℓ is the Hubble parameter, the inverse of the cosmological radius ℓ.2 While these coordinates are extremely convenient in the detailed calculations of gravitational perturbations, it is obvious that they are ill-suited for taking the limit Λ → 0. To take this limit, it is simplest to use the proper time t, which is related to the conformal time η via Hη = −e^{−Ht}. In terms of t, the de Sitter metric becomes

ḡ_ab dx^a dx^b = −dt² + e^{2Ht} (dx² + dy² + dz²),  (2.2)

and it is manifest that the metric coefficients go to those of the Minkowski metric as Λ goes to zero. Therefore, to compare geometric structures in de Sitter space-time to those in Minkowski space, it is important to use the differential structure induced on the Poincaré patch by (t, x⃗), and not by (η, x⃗)! Locally, of course, this metric admits 10 (de Sitter) Killing fields. However, since the Poincaré patch is only a part of de Sitter space-time, only those isometries are permissible that map this patch to itself. Therefore, we now have to restrict ourselves only to those Killing fields that are tangential to its boundary E+(i−) in the full de Sitter space-time. As discussed in detail in section 4.C.2 of [5], these Killing fields constitute a 7-dimensional family. We have 3 spatial translations T^a_(i) and 3 spatial rotations R^a_(i), tangential to each η = const slice, generating the Euclidean group. In addition, there is a 7th Killing field,

T^a ∂_a = −H (η ∂/∂η + x ∂/∂x + y ∂/∂y + z ∂/∂z).  (2.3)

We will refer to T^a as the time translation because: i) it is the limit of the time translation Killing field in the Schwarzschild-de Sitter space-time as the mass goes to zero, and, ii) in the (t, x⃗) coordinates, it reduces to a time-translation in Minkowski space-time as Λ → 0.3 The commutation relations between these seven Killing fields are given by:

[T_(i), T_(j)] = 0,  [R_(i), R_(j)] = ε_(i)(j)^(k) R_(k),  [R_(i), T_(j)] = ε_(i)(j)^(k) T_(k),  [T, T_(i)] = H T_(i),  [T, R_(i)] = 0.  (2.4)

(Note that the time translation does not commute with space-translations.) We will denote this 7-dimensional Lie algebra of symmetries of the Poincaré patch by g_Poin and the Lie group it generates by G_Poin.4 Finally, in the standard conformal completion of the Poincaré patch, I + has R³ topology and this 7-dimensional group preserves the completeness of the allowed class of metrics on I + [5].

2 These coordinates -as well as the coordinates (t, x⃗) discussed below- have the disadvantage that they do not cover the past boundary of our Poincaré patch, i.e., the event horizon E+(i−) of i−. But this limitation will not affect our considerations.

3 The limit Λ → 0 of T^a illustrates the importance of using the correct differential structure to take this limit. Had we used the differential structure provided by (η, x⃗) we would have concluded from (2.3) that T^a vanishes in the limit. But this procedure would have been incorrect because the metric ḡ_ab diverges in this limit (although it reduces to the well-defined Minkowski metric if the limit is taken using the differential structure induced by (t, x⃗)). Note, incidentally, that T^a is sometimes referred to as 'dilation' because it is the conformal Killing vector field representing a dilation with respect to the flat metric g̃_ab.

4 This is the group that leaves the point i− on I− of de Sitter space-time invariant. As Λ → 0, G_Poin reduces to a well-defined seven-dimensional subgroup of the Poincaré group; the limit carries the memory of the preferred t = const slicing.
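As a quick independent check of (2.3), the Killing character of T^a can be verified symbolically. The following is a minimal sketch in Python with sympy (the choice of tooling, and the helper name lie_metric, are our own; the metric and vector components are exactly those of (2.1) and (2.3)): it computes the Lie derivative of ḡ_ab along T^a and confirms that it vanishes.

```python
import sympy as sp

# Chart (eta, x, y, z) on the Poincare patch; H is the Hubble parameter.
eta, x, y, z, H = sp.symbols('eta x y z H', real=True)
xs = [eta, x, y, z]

# de Sitter metric of Eq. (2.1): g_ab = (1/(H^2 eta^2)) diag(-1, 1, 1, 1)
g = sp.diag(-1, 1, 1, 1) / (H**2 * eta**2)

# Candidate 'time translation' of Eq. (2.3): T = -H (eta d_eta + x d_x + y d_y + z d_z)
T = [-H*eta, -H*x, -H*y, -H*z]

def lie_metric(g, V):
    """(L_V g)_ab = V^c d_c g_ab + g_cb d_a V^c + g_ac d_b V^c."""
    n = len(xs)
    L = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            expr = sum(V[c] * sp.diff(g[a, b], xs[c]) for c in range(n))
            expr += sum(g[c, b] * sp.diff(V[c], xs[a]) for c in range(n))
            expr += sum(g[a, c] * sp.diff(V[c], xs[b]) for c in range(n))
            L[a, b] = sp.simplify(expr)
    return L

print(lie_metric(g, T))   # expect the 4x4 zero matrix: T is a Killing field
```

Applied to a vector field outside the seven-dimensional family -for instance ∂/∂t written in the (η, x⃗) chart- the same helper does not return zero, reflecting the fact that only the fields listed above survive as symmetries of the patch.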
B. Maxwell fields in de Sitter space-time
As is well-known, each Killing symmetry K^a leads to a conserved quantity. For matter fields -such as the Maxwell field F_ab- the standard procedure is to use the stress-energy tensor

T_ab = (1/4π) ( F_am F_bn ḡ^mn − (1/4) ḡ_ab F_cd F_mn ḡ^cm ḡ^dn ).

The conserved quantity associated with a Killing field K^a is given by

F_K = ∫_Σ T_ab K^a n^b d³V,  (2.5)

where the integral is taken over any Cauchy surface Σ with unit normal n^a. F_K may be regarded as the 'flux' of the conserved quantity across Σ. However, for the linearized gravitational field, we do not have a gauge invariant, locally defined stress-energy tensor. We will now show that, in the Maxwell theory, the expression (2.5) of F_K can also be obtained using a covariant phase space framework without having to refer to the stress-energy tensor. In section III we will use this alternate method to calculate conserved quantities for the linearized gravitational field.
Consider a globally hyperbolic space-time (M, ḡ_ab) with a Killing field K^a. Denote by Γ^Max_Cov the space of all suitably regular, source-free solutions F_ab to the Maxwell equations ∇_[a F_bc] = 0 and ḡ^ac ∇_c F_ab = 0. Starting from the Maxwell Lagrangian, one can show that Γ^Max_Cov is naturally endowed with a symplectic structure (i.e., a closed, non-degenerate 2-form) ω_Max:

ω_Max(F, F′) = (1/4π) ∫_Σ d³V n^b ( F_ab F′^a − F′_ab A^a )|_{A↔A′},  i.e.  ω_Max(F, F′) = (1/4π) ∫_Σ d³V n^b ( F_ab A′^a − F′_ab A^a ).  (2.6)

Here F and F′ are any two solutions to the Maxwell equations, A_a is any vector potential for F_ab (i.e., F_ab = 2∇_[a A_b]) and Σ is again any Cauchy surface. Using the Maxwell equations (and the suitable fall-off implicit in the regularity condition) it is easy to verify that the right side is independent of the choice of the Cauchy surface Σ and is gauge invariant. The pair (Γ^Max_Cov, ω_Max) is the Maxwell covariant phase space. Each Killing field K^a on M naturally defines a vector field K on Γ^Max_Cov via: K|_F ≡ δ_K F := L_K F_ab. Not surprisingly, the flow generated by K on Γ^Max_Cov preserves the symplectic structure ω_Max, i.e., defines a 1-parameter family of canonical transformations on (Γ^Max_Cov, ω_Max). The Hamiltonian generating this flow is a function H_K on Γ^Max_Cov given by

H_K := (1/2) ω_Max(F, L_K F).  (2.7)

For any Killing field K^a one can verify that H_K defined in (2.7) equals F_K defined in (2.5).
(For details on the covariant phase space of fields, including general relativity, see, e.g., [13].) Let us illustrate this result for the Killing fields in the Poincaré patch. Let us first set K^a = S^a, where S^a stands for any one of the 6 Killing fields T^a_(i) and R^a_(i) tangential to the space-like slices Σ given by η = const. Then we have

F_S = (1/4π) ∫_Σ d³V S^a ε_abc E^b B^c,  (2.8)

where E_a := F_ab n^b and B_a := *F_ab n^b are the electric and magnetic parts of the Maxwell field, and ε_abc is the alternating tensor on the slice Σ. Thus, as one would expect, F_S is the flux of the S-component of the Poynting vector ε_abc E^b B^c across Σ. Next, consider the Hamiltonian (2.7) generated by S. Integrating by parts, and then using the Cartan identity together with the Maxwell equation D_a E^a = 0, one finds that H_S equals the flux F_S of (2.8). Thus, using the covariant phase space we can recover the conserved quantity F_S as the Hamiltonian H_S defined by the Killing symmetry S^a. Because of the conformal invariance of the Maxwell equations, we can easily take the limit as Σ approaches I + and express the conserved flux as an integral over I +. The expression (2.8) brings out the fact that if the magnetic field vanishes at I +, then that electromagnetic wave carries no angular momentum or linear momentum. For the time translation T^a, the argument establishing the equality of F_K and H_K is the same but the calculation is a little more involved because T^a has components both along and orthogonal to the cosmological slices (see Eq. (2.3)). In the limit as Σ approaches I +, T^a becomes tangential to I + (since η = 0 at I +) and its component orthogonal to the slices vanishes. Therefore the expression of the conserved energy reduces to an integral over I + of the component of the Poynting vector along T^a:

F_T = (1/4π) ∫_{I+} d³V T^a ε_abc E^b B^c,

where the electric and magnetic fields and the alternating tensor are calculated using any conformally rescaled metric that is regular at I + (e.g., g̃_ab). This expression brings out two interesting facts. First, in de Sitter space-time, while the energy carried by electromagnetic waves is conserved as in Minkowski space-time, now it can be negative and is unbounded below. Second, if we restrict ourselves to Maxwell fields whose magnetic field vanishes at I +, then those electromagnetic fields carry no energy either. Note that the second result is specific to I +: if the magnetic field vanishes on a cosmological slice η = const ≠ 0, the energy of that Maxwell field does not vanish unless the Maxwell field itself vanishes identically. The 3-momentum and the angular momentum, on the other hand, do vanish. To summarize, for Maxwell fields, the conserved quantities associated with Killing fields in the Poincaré patch can be recovered as Hamiltonians on the covariant phase space, without any reference to the stress-energy tensor. Also, because all Killing fields K^a on de Sitter space-time are tangential to I + -and hence space-like- one can express every conserved quantity F_K as an integral across I + of the component of the Poynting vector along K^a. This expression brings out the fact that if we were to require that the magnetic field vanish at I +, we would be left with electromagnetic waves that carry no 3-momentum or angular momentum, nor energy defined by de Sitter isometries!
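As a toy illustration of the flux integral in (2.8), the following Python sketch (using numpy; the field profiles are invented for the example and are not exact Maxwell solutions in de Sitter space-time) evaluates F_S for a transverse pair E, B on a flat slice: the flux is positive along the propagation direction and vanishes identically once the magnetic part is set to zero, mirroring the statement above.

```python
import numpy as np

# Toy evaluation of (2.8): F_S = integral of S^a eps_abc E^b B^c over a flat slice.
# The profiles below are invented for illustration, not exact Maxwell solutions.
n = 64
ax = np.linspace(0.0, 2*np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
dV = (ax[1] - ax[0])**3
f = np.cos(X)                                   # wave profile moving along x

E = np.stack([np.zeros_like(f), f, np.zeros_like(f)])   # E along y
B = np.stack([np.zeros_like(f), np.zeros_like(f), f])   # B along z
S = np.stack([np.ones_like(f), np.zeros_like(f), np.zeros_like(f)])  # x-translation

print((S * np.cross(E, B, axis=0)).sum() * dV)    # > 0: momentum along propagation
print((S * np.cross(E, 0*B, axis=0)).sum() * dV)  # 0 once the magnetic part vanishes
```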
III. LINEARIZED GRAVITATIONAL FIELDS
As in section II A, we will use the (η, x⃗) chart and the form (2.1) of the de Sitter metric ḡ_ab in the Poincaré patch. The perturbed metric will be denoted by

g_ab = ḡ_ab + ε γ_ab,  (3.1)

where ε is the smallness parameter and γ_ab denotes the first order perturbation. Then, in the Lorentz and radiation gauge, i.e., when the gauge freedom is exhausted by requiring that γ_ab satisfy

∇̄^a γ_ab = 0;  γ_ab η^a = 0;  and  γ_ab ḡ^ab = 0,  (3.2)

the linearized Einstein's equation simplifies to

□̄ γ_ab + (2Λ/3) γ_ab = 0.  (3.3)

(Here η^a is the vector field normal to the cosmological slices with η^a ∂_a = ∂/∂η.) Following a common strategy in the cosmology literature, it is convenient to rewrite (3.1) using

γ_ab = a²(η) h_ab,  with a(η) = −1/(Hη),  (3.4)

since calculations are simpler in terms of the mathematical field h_ab than in terms of the physical perturbation γ_ab. Indeed, the gauge conditions (3.2) can now be written using the background flat geometry of g̃_ab:

∇̃^a h_ab = 0;  h_ab η^a = 0;  and  h_ab g̃^ab = 0,  (3.5)

and the linearized Einstein's equation becomes

□̃ h_ab + (2/η) ḣ_ab = 0,  (3.6)

where ḣ_ab ≡ η^c ∇̃_c h_ab. Note that the gauge conditions and linearized Einstein's equation satisfied by h_ab are the same as those satisfied by linearized gravitational fields in Minkowski space-time in the absence of a cosmological constant, except for the extra term (2/η) ḣ_ab in the linearized Einstein's equation. In the (t, x⃗) differentiable structure that is well-suited to take the limit Λ → 0, the extra term (2/η) ∂_η h_ab = −2H ∂_t h_ab goes to zero, just as one would expect.
As in the case of linearized fields in Minkowski space-time, it is simplest to find explicit solutions using a Fourier transform:

h_ab(x⃗, η) = ∫ d³k Σ_(s) h^(s)_k(η) e^(s)_ab(k⃗) e^{i k⃗·x⃗},  (3.7)

where (s) labels the two helicity states and e^(s)_ab(k⃗) are the corresponding transverse, traceless polarization tensors. Here, and in what follows, q̃_ab is the fixed spatial Euclidean metric on the cosmological slices, tailored to the co-moving coordinates x⃗, and an overbar denotes complex conjugation. The two functions h^(s)_k(η) capture the gauge invariant information -the transverse traceless modes- of the linearized gravitational field. Since the h_ab(x⃗, η) are real fields, it follows that

h̄^(s)_k(η) ē^(s)_ab(k⃗) = h^(s)_{−k}(η) e^(s)_ab(−k⃗).  (3.8)

The field equation (3.6) implies that the h^(s)_k satisfy the ordinary differential equation (ODE)

h″^(s)_k − (2/η) h′^(s)_k + k² h^(s)_k = 0,  (3.10)

where the prime denotes differentiation with respect to η, and k² = k⃗ · k⃗. The second order ODE (3.10) can be readily solved to obtain the general solution

h^(s)_k(η) = E^(s)_k (sin kη − kη cos kη) + B^(s)_k (cos kη + kη sin kη),  (3.11)

where E^(s)_k and B^(s)_k are arbitrary coefficients (in the Schwartz space), determined by the initial data of the solution. (These coefficients can also depend on Λ. We did not make this dependence explicit because in the main text we work with a fixed value of Λ.) Substituting (3.11) in (3.7) we obtain the general solutions h_ab representing first order perturbations.
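To make the behavior of the two modes concrete, here is a small numerical cross-check (Python with numpy/scipy; an illustrative sketch, assuming the mode basis written in (3.11) and the invented choice k = 1 for the comoving wave number). It integrates the ODE (3.10) from deep inside the patch towards η → 0⁻ and compares against the closed-form basis.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0   # comoving wave number (illustrative choice)

# Closed-form basis of Eq. (3.11) for h'' - (2/eta) h' + k^2 h = 0, eta < 0
hE  = lambda e: np.sin(k*e) - k*e*np.cos(k*e)    # 'decaying' mode (coefficient E_k)
hB  = lambda e: np.cos(k*e) + k*e*np.sin(k*e)    # 'growing'  mode (coefficient B_k)
dhE = lambda e: k**2 * e * np.sin(k*e)
dhB = lambda e: k**2 * e * np.cos(k*e)

def rhs(e, y):                 # y = (h, h'); the ODE (3.10)
    return [y[1], (2.0/e)*y[1] - k**2*y[0]]

eta0, eta1 = -20.0, -1e-2      # integrate towards I+ (eta -> 0^-)
for h, dh, name in [(hE, dhE, 'E-mode'), (hB, dhB, 'B-mode')]:
    sol = solve_ivp(rhs, (eta0, eta1), [h(eta0), dh(eta0)], rtol=1e-10, atol=1e-12)
    print(f"{name}: numeric={sol.y[0, -1]:+.6e}  exact={h(eta1):+.6e}")
# The E-mode tends to 0 while the B-mode tends to 1 as eta -> 0^-,
# i.e. only the 'growing' mode survives at I+.
```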
Next, let us discuss curvature. Since the Weyl tensor of de Sitter space-time vanishes, the first order perturbations (1)E_ab and (1)B_ab of the electric and magnetic parts of the Weyl curvature are gauge invariant and can be expressed directly in terms of the solutions h_k(η) in (3.11). To find these expressions, we first note that, in exact general relativity, the electric and magnetic parts are related to the first and second fundamental forms q_ab and K_ab on any space-like surface via the standard initial-value relations, in which D, ε_abc and R_ab denote the derivative operator, alternating tensor and the Ricci curvature of the 3-metric q_ab, and (4)R_ab the Ricci curvature of the space-time metric g_ab. It is straightforward to linearize these equations using the cosmological foliation on the de Sitter background. Calculations are simplified by noting that: (i) E_ab and B_ab are conformally invariant, and, (ii) a convenient conformal completion of de Sitter is provided by choosing the conformal factor Ω = −Hη, so that the conformal metric Ω²ḡ_ab that is well behaved at I + is just the Minkowski metric g̃_ab in the (η, x⃗) chart. Therefore, in effect, the linearization can be carried out using this flat background metric, and the perturbed electric and magnetic parts of the Weyl tensor can be expressed using h_ab and geometric structures associated with the flat 3-metric q̃_ab on each cosmological slice. Recall that the boundary conditions at I + imply that the Weyl curvature of an asymptotically de Sitter metric must vanish at I + [4,5]. Therefore, the first order perturbations (1)E_ab and (1)B_ab of the Weyl curvature also vanish at I + and admit smooth limits there. As a shorthand, we will drop the superscript and refer to E_ab and B_ab as the perturbed electric and magnetic parts of the Weyl curvature, since it is these quantities that will feature in most of our discussion. Using the explicit solutions (3.11) it is easy to verify that they do indeed admit smooth limits to I +: at I +, E_ab is determined entirely by the coefficients E^(s)_k of (3.11), and B_ab entirely by the coefficients B^(s)_k. Thus, the condition that the magnetic part vanish at I + -or, that conformal flatness of the 3-metric at I + be preserved to first order- removes, by fiat, half the degrees of freedom from consideration. The first expectation based on Maxwell fields is explicitly borne out. In section IV we will show that the second expectation is also borne out: the remaining gravitational waves, that do preserve conformal flatness to first order, carry no energy, momentum or angular momentum.
Remarks:
(i) The explicit solution (3.11) shows that, as one approaches I + (i.e. as η → 0), the term associated with E^(s)_k vanishes while the term associated with B^(s)_k survives. In the cosmology literature, the first is referred to as the 'decaying mode' and the second as the 'growing mode'. Thus, the requirement that the magnetic part of the perturbed Weyl curvature vanish at I + removes by fiat the growing mode and leaves only the decaying mode. These perturbations h_ab vanish at I +.
(ii) Let us return to the linearized Einstein's equation (3.6) satisfied by h_ab. While one can think of h_ab as a field propagating on the Minkowski metric g̃_ab, because of the additional term (2/η) ḣ_ab the propagation is not sharp; there is a 'tail term'. On the other hand, the linearized Weyl tensor satisfies conformally invariant equations. Its propagation does not have a tail term. Interestingly, the same is true of the time derivative of the metric perturbation: one can verify that ḣ_ab satisfies the conformally invariant equation (□̄ − (4)R̄/6) ḣ_ab = 0. Equivalently, since ḡ_ab = (1/(H²η²)) g̃_ab, it follows that □̃ [(1/η) ḣ_ab] = 0. Therefore the propagation of (1/η) ḣ_ab on (M, g̃_ab), and hence of ḣ_ab on (M, ḡ_ab), is in fact sharp, without any tail terms. This fact has an interesting implication in the discussion of the quadrupole formula [11].
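The absence of a tail term for ḣ_ab can be checked mode by mode. Here is a minimal symbolic sketch (Python with sympy, using the mode basis of (3.11)): for each basis function, the field (1/η) h′ is shown to satisfy the flat-space mode equation g″ + k²g = 0, i.e., sharp propagation.

```python
import sympy as sp

eta, k = sp.symbols('eta k', positive=True)   # the sign of eta plays no role here

# Basis solutions of (3.10), as written in (3.11)
for h in [sp.sin(k*eta) - k*eta*sp.cos(k*eta),
          sp.cos(k*eta) + k*eta*sp.sin(k*eta)]:
    g = sp.diff(h, eta) / eta                 # the field (1/eta) h'
    # Flat-space mode equation g'' + k^2 g = 0  <=>  tail-free propagation
    print(sp.simplify(sp.diff(g, eta, 2) + k**2 * g))   # expect 0 for both modes
```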
IV. THE HAMILTONIAN FRAMEWORK
This section is divided into two parts. In the first, we construct the covariant phase space of source-free, linearized gravitational fields on the de Sitter background. In the second, we obtain expressions of energy, momentum and angular momentum carried by gravitational waves by computing the Hamiltonians corresponding to the seven Killing fields on the Poincaré patch.
A. The covariant phase space
For linearized gravitational fields, the covariant phase space Γ_Cov can be taken to be the space of solutions γ_ab to the equations (3.2) and (3.3). For simplicity, we will assume that the solutions of interest have initial data in the Schwartz space of rapidly decreasing, smooth fields, although these conditions can be weakened considerably. The standard procedure (see, e.g., [13]) endows Γ_Cov with a symplectic structure ω. Restricted to the cosmological slices Σ (given by η = const), it becomes:

ω(h, h̃) = (1/2κ) ∫_Σ d³x a²(η) q̃^ac q̃^bd ( h̃_ab ∂_η h_cd − h_ab ∂_η h̃_cd ),  (4.1)

where h_ab is related to the physical metric perturbation γ_ab via γ_ab = a² h_ab (see Eq. (3.4)) and κ = 8πG. It is easy to verify that (3.5) and (3.6) imply that the integral is independent of the η = const slice on which it is evaluated. This form of the symplectic structure is useful in calculations within the Poincaré patch. Furthermore, as we will see in section IV B, it is well-adapted for taking the limit Λ → 0.
In the cosmology literature, one often works with two scalar fields in place of the tensor fields γ_ab or h_ab, defined by

φ^(s)(x⃗, η) := (1/√(4κ)) ∫ d³k h^(s)_k(η) e^{i k⃗·x⃗}.  (4.2)

(The factor of √(4κ) is introduced to endow φ^(s) with the standard dimensions of a scalar field, so that the scalar and tensor perturbations can be treated in a completely parallel manner. See, e.g., section 3.D of [14].) These are referred to as the two tensor modes. It is straightforward to verify that these fields satisfy the wave equation in de Sitter space-time,

□̄ φ^(s)(x⃗, η) = 0,  (4.3)

and that substituting (4.2) in (4.1) yields the standard Klein-Gordon symplectic structure (4.4) for the pair φ^(s).
This form of the symplectic structure is useful to compute expressions of fluxes of energy-momentum and angular momentum that are adapted to the 'tensor modes' used in cosmological perturbation theory. However, the expressions (4.1) and (4.4) of the symplectic structure have one drawback: because of the multiplicative factor a²(η) = 1/(H²η²), they are not well-suited for taking the limit to I + (where η = 0). While the limit itself is well defined because the symplectic structure is independent of η, to express physical results -e.g., the formula for energy- in terms of fields that are well defined at I +, one has to be extremely careful in keeping track of terms in the integrand which tend to zero at the appropriate rate to compensate for the apparent blow up as 1/η² due to the pre-factor in front of the integral. Also, these expressions are not gauge invariant as they use the specific gauge conditions (3.5). To overcome these limitations, it is convenient to recast the expression (4.1) using the relation (4.5) between the perturbed electric part of the Weyl tensor E_ab and the metric perturbation, which holds on any cosmological slice. Substituting for ∂_η h_ab in terms of E_ab and simplifying by performing integrations by parts, we obtain:

ω(h, h̃) = (1/2Hκ) ∫_Σ d³x q̃^ac q̃^bd ( h̃_ab E_cd − h_ab Ẽ_cd ).  (4.6)

We will use both expressions, (4.1) and (4.6), of the symplectic structure on Γ_Cov in our discussion of the conserved fluxes associated with the 7 Killing vectors. (The equivalent form (4.4) in terms of the Klein-Gordon fields φ^(s) turns out not to be as useful in providing hints for the full, nonlinear theory.) We will conclude this discussion by pointing out several consequences that follow immediately from the form (4.6) of the symplectic structure. First, it is transparent that (1/2Hκ) E_ab can be regarded as the momentum that is canonically conjugate to the metric perturbation h_ab. Second, as we saw in section III, the perturbations h_ab as well as the perturbed electric part of the Weyl tensor E_ab admit well defined limits to I +. Therefore, one can take the limit Σ → I + simply by evaluating the integral (4.6) on I +. This feature will facilitate our task of expressing energy, momentum and angular momentum in terms of asymptotic fields at I +. In turn, these expressions will be directly useful in [11] to obtain a formula for the energy emitted by a time-changing quadrupole, and to establish its positivity. The third and more important feature is gauge invariance. Note first that E_ab by itself is gauge invariant, it is tangential to the cosmological slices, and it is divergence- and trace-free. This fact enables us to drop the gauge fixing conditions (3.5) and consider general perturbations. For, if either γ_ab (or γ̃_ab) is a pure gauge field -i.e., of the form ∇_(a ξ_b) for a space-time vector field ξ^a- the properties of E_ab ensure that the expression (4.6) of ω(h, h̃) vanishes identically. Thus, the passage from ∂_η h_ab to E_ab using (4.5) has provided us with a manifestly gauge invariant expression (4.6) of the symplectic structure. Finally, using the explicit solutions (3.11), we can re-express the symplectic structure in terms of the coefficients E_k and B_k (with an overbar again denoting complex conjugation); in this form (4.7), ω pairs the E^(s)_k coefficients of one solution with the conjugated B^(s)_k coefficients of the other. Consequently, the pull-back of the symplectic structure to the subspace of Γ_Cov on which B_ab vanishes on I + -or, alternatively, on which E_ab vanishes on I +- is identically zero. These subspaces are among the maximal Lagrangian subspaces of Γ_Cov. In this respect the situation is again completely parallel to that in the Maxwell theory.
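The slice-independence of (4.1), restricted to a single Fourier mode, is a Wronskian statement: a²(η)(h₁′h₂ − h₁h₂′) must be constant in η. A short symbolic check follows (Python with sympy; normalization factors such as 1/2κ are dropped, so this is an illustrative sketch rather than the full expression).

```python
import sympy as sp

eta, k, H = sp.symbols('eta k H', positive=True)
a2 = 1 / (H**2 * eta**2)                     # a^2(eta) = 1/(H eta)^2

h1 = sp.sin(k*eta) - k*eta*sp.cos(k*eta)     # E-mode of (3.11)
h2 = sp.cos(k*eta) + k*eta*sp.sin(k*eta)     # B-mode of (3.11)

# Single-mode reduction of the symplectic density in (4.1):
# a^2 (h1' h2 - h1 h2') must be independent of the slice eta = const.
omega = sp.simplify(a2 * (sp.diff(h1, eta)*h2 - h1*sp.diff(h2, eta)))
print(omega)                                 # k**3/H**2 : slice-independent
```

The result, k³/H², also pairs an E coefficient with a B coefficient, in line with the statement that the pull-back of ω to the B_ab = 0 (or E_ab = 0) subspace vanishes.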
B. 3-momentum, angular momentum and energy carried by gravitational waves
We can now calculate the Hamiltonians on Γ_Cov corresponding to the seven Killing fields on the Poincaré patch. Recall from (3.4) that the physical metric perturbation is γ_ab = a² h_ab and that it satisfies the gauge conditions (3.2) and linearized Einstein's equation (3.3), which refer only to the background de Sitter metric ḡ_ab. Therefore, if γ_ab ∈ Γ_Cov, then so is γ^(K)_ab := L_K γ_ab, for any Killing field K^a of ḡ_ab. From the definition (3.4) of h_ab, it follows that

h^(K)_ab := a^{−2} γ^(K)_ab,  with a = −1/(Hη).  (4.8)

As in the Maxwell case, the isometries generated by each of the seven Killing fields K^a in g_Poin provide a 1-parameter family of canonical transformations on Γ_Cov. From general results on the covariant phase space [13] it follows that the corresponding Hamiltonian is again given by

H_K = −(1/2) ω(h, h^(K)).  (4.9)

Recall from section III that if B_ab = 0 at I +, then h_ab also vanishes there. In this case, then, we have H_K = 0 for every K^a in g_Poin. Thus, although there do exist linearized gravitational waves that retain conformal flatness of the induced geometry at I + to first order, they carry no energy, 3-momentum or angular momentum.
We will now compute the Hamiltonians (4.9) for the seven Killing fields in g_Poin.
3-momentum and angular momentum
As in the case of the Maxwell fields discussed in section II B, the calculations are identical for the 3 spatial translations T^a_(i) and the 3 rotations R^a_(i). Let us therefore again denote by S^a any of these six Killing fields and calculate the 3-momentum or angular momentum H_S, and then discuss the energy H_T separately. For these six Killing fields, we have h^(S)_ab = L_S h_ab, since these fields are all tangential to the η = const surfaces. Furthermore, from (4.5) it follows that the corresponding perturbed electric part of the Weyl tensor, E^(S)_ab, is given by E^(S)_ab = L_S E_ab. (4.10) Therefore, after an integration by parts, (4.9) becomes:

H_S = −(1/2Hκ) ∫_Σ d³x q̃^ac q̃^bd E_ab L_S h_cd.  (4.11)

Thus, the expressions of 3-momentum and angular momentum mirror those in the Maxwell theory. Since the integrand in (4.11) refers only to the fields h_ab, E_ab and the metric q̃_ab, all of which have smooth limits to I +, to take the limit Σ → I + we just have to evaluate (4.11) on I +. Finally, let us consider the limit Λ → 0 of H_S. Since the Hubble parameter H tends to zero in this limit, from the form of (4.11) the limit seems divergent at first sight. However, this conclusion is incorrect because the fields in the integrand also depend on H. Let us therefore analyze the limit more carefully. As explained in section II, to take this limit, we should use the differential structure induced by the chart (t, x⃗) on the Poincaré patch (and not by the chart (η, x⃗)). Then, for the background geometry, we find that as Λ → 0 the metric ḡ_ab tends to the Minkowski metric η_ab, and the gauge conditions (3.5) and field equation (3.6) go over to the familiar transverse-traceless conditions and wave equation of Minkowski space-time:

∂^a h̄_ab = 0;  h̄_ab t^a = 0;  η^ab h̄_ab = 0;  and  □ h̄_ab = 0.  (4.13)

Denote the space of solutions h̄_ab to these equations by Γ̄_cov. It is straightforward to verify that in the limit Λ → 0, the symplectic structure ω on Γ_Cov goes over to the standard symplectic structure ω̄ on Γ̄_cov, given by (4.1) with the scale factor a set to 1 and ∂_η replaced by ∂_t, the integral now being taken on a t = const slice (Eq. (4.14)). Finally, the limit H̄_S of the Hamiltonian H_S = −(1/2) ω(h, h^(S)) is given by

H̄_S = −(1/2) ω̄(h̄, L_S h̄).  (4.15)

This is precisely the expression of the linear and angular momentum of linearized gravitational waves in Minkowski space-time. Thus, although the procedure of taking the limit Λ → 0 is rather subtle, the de Sitter 3-momentum and angular momentum (4.11) do reduce to the standard conserved fluxes in Minkowski space-time.
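The subtlety in the Λ → 0 limit of the symmetry generators themselves can also be made concrete. Below is a minimal sketch (Python with sympy; purely illustrative, using Hη = −e^{−Ht} and the form (2.3) of T^a restricted to one spatial direction): pushing T^a into the (t, x) chart gives T = ∂_t − Hx ∂_x, which manifestly tends to the Minkowski time translation ∂_t as H → 0, whereas its components in the (η, x) chart vanish in that limit.

```python
import sympy as sp

t, x, H = sp.symbols('t x H', positive=True)
eta = -sp.exp(-H*t) / H          # H eta = -exp(-H t)

# Push T = -H (eta d_eta + x d_x) of (2.3) into the (t, x) chart:
T_eta, T_x = -H*eta, -H*x
T_t = sp.simplify(T_eta / sp.diff(eta, t))   # chain rule: T^t = T^eta dt/deta
print(T_t, T_x)                  # 1 and -H*x, i.e. T = d_t - H x d_x
print(sp.limit(T_x, H, 0))       # 0: T tends to the Minkowski time translation d_t
```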
Remark: While taking the limit, we assumed the existence of a family h_ab(Λ), satisfying the gauge conditions (3.5) and linearized Einstein equation (3.6) for each Λ, that admits a smooth limit h̄_ab on the Poincaré patch as Λ → 0. An explicit example of such a family is provided by an appropriate choice (Eq. (4.16)) of the Λ-dependence of the coefficients E^(s)_k and B^(s)_k in (3.11). A careful calculation shows that in the limit Λ → 0 the resulting field h_ab(Λ) tends to a solution h̄_ab of (4.13). Thus, a general solution to the linearized Einstein's equation in Minkowski space in the transverse, traceless, radiation gauge can be obtained as a limit of this family h_ab(Λ).
Energy
Next, let us consider the energy H_T defined by the time translation T^a of (2.3). In this case, the calculation is not as straightforward because: (i) the vector field T^a is not tangential to the cosmological slices η = const except at I +; (ii) h^(T)_ab = L_T h_ab + 2H h_ab rather than simply L_T h_ab, since L_T a² = 2H a² (see (4.8)); and, (iii) a detailed calculation shows that E^(T)_ab also has an extra term beyond L_T E_ab (Eq. (4.20)). Once these differences are taken into account, the conserved energy-flux H_T across Σ can be calculated using (4.9):

H_T = −(1/2) ω(h, h^(T)),  (4.21)

where, as before, Σ is any cosmological slice. However, since T^a is not tangential to Σ, on a general cosmological slice we cannot integrate by parts as we did for H_S. Again, the limit to I + is straightforward since all fields in the integrand have smooth limits to I +. Furthermore, in the limit T^a becomes tangential to I +, enabling us to simplify (4.21) further into an expression (4.22) involving only the coefficients E^(s)_k and B^(s)_k of the explicit Fourier-mode solutions (3.11) for h_ab. Since B_ab = 0 if and only if B^(s)_k = 0, this last expression makes it explicit that if a gravitational wave does not change conformal flatness of the intrinsic geometry at I + to first order, it does not carry energy. Finally, the expression (4.9) of H_K is linear in K^a for all Killing fields. Therefore, H_{λT} = λ H_T for all real numbers λ. For linearized gravitational waves on Minkowski space-time, energy is positive definite and vanishes if and only if the perturbation is pure gauge. On de Sitter space-time, the conserved energy H_T can have either sign and we have an infinite dimensional subspace of the physical, transverse-traceless modes for which the energy vanishes. From (4.22) it is clear that the energy also vanishes if E_ab vanishes on I +. (The other possibility, L_T h_ab = −2H h_ab on I +, is not realized because such perturbations would not be in the Schwartz space on the I + of the Poincaré patch, which is topologically R³ [5].) Finally, let us consider the limit Λ → 0 of the conserved energy H_T. For the reasons given in section IV B 1, we have to use the differential structure induced by the coordinates (t, x⃗) and work with a cosmological slice in the Poincaré patch with η ≠ 0. Let us again suppose that we have a 1-parameter family of perturbations h_ab(Λ) that satisfy the gauge conditions (3.5) and the linearized Einstein equation (3.6), and admit a smooth limit h̄_ab as Λ → 0. As discussed above, h̄_ab is a metric perturbation on the Minkowski metric η_ab, satisfying (4.13). The limit H̄_t of the Hamiltonian H_T = −(1/2) ω(h, h^(T)) is given by

H̄_t = −(1/2) ω̄(h̄, L_t h̄),  (4.23)

which, after using (4.13) and integrating by parts, is precisely the conserved energy flux of the linearized gravitational field h̄_ab in Minkowski space-time. Thus, our energy expression (4.21) for linearized gravitational fields in de Sitter space-time does have the expected limit as Λ → 0. Note that the limit is quite subtle and discontinuous: while H_T can be negative and arbitrarily large in absolute value, no matter how small the positive Λ is, in the limit Λ → 0 we obtain H̄_t, which is positive definite! Geometrically, this occurs because while the Killing field T^a of the de Sitter metric ḡ_ab is space-like in the 'upper half of the Poincaré patch' for every Λ > 0, its limit, the Killing field t^a of η_ab, is time-like everywhere.
Remarks: (i) In the cosmological literature, the discussion of 'energy' often refers to the Hamiltonians H_η or H_t that generate evolution along the conformal time η or proper time t. Since η^a and t^a are not Killing fields, these Hamiltonians are not conserved. Thus, they are unrelated to the conserved energy H_T discussed above and are not analogs of the standard notion of energy in Minkowski space-time used in the gravitational radiation theory. (ii) As discussed in section IV A, in cosmology one often encodes the metric perturbations γ_ab(x⃗, η) in the two 'tensor modes' φ^(s)(x⃗, η) that satisfy the Klein-Gordon equation with respect to the de Sitter metric ḡ_ab. On the Klein-Gordon phase space Γ^KG_Cov, the isometry generated by any Killing field K^a again defines a 1-parameter family of transformations that preserve the symplectic structure ω_KG. As one would expect, the corresponding Hamiltonians agree with the H_K obtained above for all seven Killing fields. That is, our energy-momentum and angular momentum expressions H_T and H_S hold both for the metric perturbations γ_ab satisfying (3.2) and (3.3) and for the 'tensor modes' φ^(s) satisfying the wave equation (4.3) in the Poincaré patch. (iii) Finally, we note that the explicit solutions (3.11) are widely used in the cosmological literature on linearized gravitational waves. However, the primary interest there is in the effect of these gravitational waves on the polarization of the CMB electromagnetic waves. To our knowledge, this literature does not contain an analysis of the asymptotic behavior of these perturbations at I +, of the implications of the assumption that the perturbations preserve conformal flatness of I + to linear order, of the isometry group G_Poin that preserves the Poincaré patch, or of the associated conserved fluxes H_K given above.
V. DISCUSSION
In the Λ = 0 case, there is a well-developed theory of isolated systems and gravitational radiation in full, nonlinear general relativity that has played a dominant role in a number of areas of gravitational science. In the first paper [5] in this series, we showed that there are significant conceptual obstacles in extending this theory to allow a positive cosmological constant, however small, because the limit Λ → 0 is discontinuous. In particular, whereas I is space-like no matter how small Λ is, it is null when Λ vanishes. If Λ were zero and the accelerated expansion of the universe were caused by some matter field rather than a cosmological constant, that field would not have the asymptotic fall-off we are familiar with in the Λ = 0 case, and the space-time curvature far away from the sources would be similar to that in asymptotically de Sitter space-times. Therefore, the difficulties discussed in [5] would persist also in the Λ = 0 case if the observed accelerated expansion continues to the infinite future. To overcome these obstacles, one needs a new framework. In this paper we completed the first step toward this goal by discussing linear gravitational waves in de Sitter space-time.
Motivated by considerations of isolated systems discussed in section II A, we focused on the upper Poincaré patch of de Sitter space-time. Isometries generated by 7 of the 10 de Sitter Killing fields leave this patch invariant. This group G Poin is generated by 3 space-translations and 3 rotations that are tangential to the cosmological slices and a time translation that is transversal to them. Therefore, one expects well defined notions of linear and angular momentum, and energy, associated with any physical field on the Poincaré patch. We showed in section II B that, in the case of Maxwell fields, these 'conserved fluxes' arise as the Hamiltonians generating canonical transformations induced by the action of Killing fields on the covariant phase space Γ Max Cov . Furthermore, in the Λ = 0 case, the Hamiltonian framework has been used very effectively also for gravitational waves in full, nonlinear general relativity: It leads to flux integrals corresponding to the Bondi-Metzner-Sachs (BMS) asymptotic symmetries [7]. Therefore, it is natural to use this strategy also in the Λ > 0 case.
Since the covariant phase space consists of solutions to the field equations, in section III we discussed the asymptotic properties of solutions to the linearized Einstein's equation in de Sitter space-time. In section IV we constructed the covariant phase space (Γ_Cov, ω) of these linear gravitational waves. Each of the 7 Killing fields K^a naturally defines a flow on Γ_Cov that preserves the symplectic structure ω thereon, and thus defines a Hamiltonian H_K. These Hamiltonians provided us with the expressions (4.11) and (4.21) of the fluxes of energy-momentum and angular momentum carried by gravitational waves. Furthermore, we could express these conserved fluxes in terms of fields defined on I +.
These results have a number of interesting features. First, to make the full nonlinear theory manageable in the Λ > 0 case, at first it seems natural to strengthen the boundary conditions by requiring that the intrinsic geometry of I + be conformally flat, as in de Sitter space-time. However, almost 30 years ago Friedrich showed that the free data at I− consists, up to arbitrary conformal rescalings, of a freely specifiable Riemannian metric and a trace-free, symmetric tensor field of valence two which satisfies a divergence equation [19]. Therefore, by applying those results to I + (in place of I−), it follows that demanding conformal flatness of the metric at I + removes by hand part of this free data.
In the linearized approximation, we could sharpen the implication of this condition. First, because the perturbed electric and magnetic parts of the Weyl curvature are gauge invariant, we can discuss the physical or true degrees of freedom, not just the freely specifiable data. Second, we could parametrize the gauge invariant content of a general linearized solution in terms of 4 functions E_(s) and B_(s) on I + that capture these true (phase space) degrees of freedom. Finally, we showed that the additional condition at I + sets B_(s) = 0. Therefore, in the linear approximation one sees explicitly that this condition cuts the true degrees of freedom in gravitational waves exactly by half. Furthermore, the gravitational waves that do satisfy this condition carry no energy-momentum or angular momentum! Thus, although this strategy of gaining control over the nonlinear theory seems plausible at first, it is simply not viable. By isolating the true degrees of freedom at I +, it should be possible to show that this sharper result holds also in full general relativity with positive Λ. Second, we found that the conserved energy has a peculiar feature: for matter fields as well as linearized gravity, the energy H_T defined by the time translation T^a can have either sign and, furthermore, is unbounded below. Thus, there exist both electromagnetic and gravitational waves on de Sitter space-time which carry arbitrarily negative energy, no matter how small the positive Λ is! This is in striking contrast with the Λ = 0 situation, where the corresponding waves in Minkowski space-time carry strictly positive energy H̄_t. How can one reconcile this strong contrast? What happens to the infinitely many solutions with large negative energy in the limit Λ → 0?
To analyze this issue, let us first recall that, to take this limit, one has to use the differential structure induced by the coordinates (x, t). In this chart, the cosmological horizons which bound region I lie at r² = (3/Λ) e^{-2Ht} (where r² = x · x). Therefore, in the limit Λ → 0, region I, in which T^a is time-like, fills out the whole Minkowski space. This is the geometric reason why, even though H_T is unbounded below no matter how small the positive Λ is, the limiting H̄_t is strictly positive. In the phase space language, as Λ changes, the covariant phase space Γ^(Λ)_Cov, on which the Hamiltonian H_T is defined, itself changes. In the limit, the set of solutions h_ab on which H_T is negative simply disappears! To summarize, as we showed explicitly in section IV B 1, there are families of metric perturbations γ_ab(Λ) that satisfy the gauge conditions (3.2) and field equations (3.3) for each Λ, and admit well-defined limits h̄_ab as Λ → 0 satisfying the standard gauge conditions and field equations (4.13) in Minkowski space-time. This limiting procedure is onto: the limits h̄_ab span the entire phase space Γ̄_Cov of metric perturbations in Minkowski space. Furthermore, along any of these families the energy H_T evaluated on γ_ab(Λ) tends to the energy H̄_T of the limiting perturbation h̄_ab in Minkowski space. Nonetheless, the lower bound of the energy function on the phase spaces Γ^(Λ)_Cov is discontinuous in the limit: it equals −∞ for every Λ, however small, but vanishes for Λ = 0.
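As a small worked step (assuming the standard de Sitter relation H = √(Λ/3), which is consistent with the e^{-2Ht} factor above), the horizon radius bounding region I on a slice of constant t behaves as

```latex
r_{\rm hor}(t) \;=\; \sqrt{\tfrac{3}{\Lambda}}\, e^{-Ht}
            \;=\; \sqrt{\tfrac{3}{\Lambda}}\, e^{-\sqrt{\Lambda/3}\, t}
\;\longrightarrow\; \infty
\qquad \text{as } \Lambda \to 0 \text{ at fixed } t,
```

so every point (x, t) lies inside region I once Λ is small enough; this is the precise sense in which region I fills out all of Minkowski space in the limit.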
Even though we do recover positivity of energy in the limit Λ → 0, we are left with a conundrum because there is strong evidence that Λ is small but non-zero in our universe: Can realistic gravitational waves have arbitrarily large negative energy in de Sitter space-time or, in the nonlinear context, in asymptotically de Sitter space-times? To probe this issue, let us first analyze in some detail the origin of negative energy. Let us begin with Maxwell fields in de Sitter space-time. The stress-energy satisfies the dominant energy condition and the Killing field T^a is future pointing on the part of E^+(i^-) that lies in region I and past pointing on the part that lies in region II (see the left panel in Fig. 2). Therefore, the energy flux across E^+(i^-) into region I is positive but that into region II is negative. It is because of this negative flux into region II that the total energy can be negative. Therefore, if the Maxwell field under consideration vanished on the part of the horizon E^+(i^-) that lies in region II, the energy of those electromagnetic waves would be necessarily positive. For gravitational waves, we do not have a stress-energy tensor. However, using the fact that the Killing field T^a is future directed and time-like in region I, it is easy to show that, if the initial data on any cosmological slice Σ were restricted to lie entirely in the intersection of Σ with region I, the energy (4.21) of that cosmological perturbation is necessarily positive. 5 In the limit η → −∞, the cosmological slice tends to E^+(i^-). Therefore, it again follows that the conserved energy flux at I + can be negative only because there is a negative energy flux into the Poincaré patch across the part of E^+(i^-) that lies in region II. But in realistic situations gravitational waves from isolated systems would be generated entirely by a time-changing quadrupole moment.

5 This is most easily seen by using the symplectic form in (4.1) to rewrite the energy, up to a positive overall factor, as

∫ d³x [ (r̂^m − η̂^m)(r̂^n − η̂^n) + s^mn + 2(1 + r/η) r̂^m η̂^n ] ∇_m h_ab ∇_n h_cd q̂^ac q̂^bd,   (5.1)

where q̂_ab r̂^a r̂^b = 1 and s_mn = q̂_mn − r̂_m r̂_n. If the initial data is restricted to the intersection of Σ with region I, we have |r/η| < 1 and, consequently, H_T is necessarily positive.
For such a source (depicted in the right panel of Fig. 2), there would be no incoming flux across E^+(i^-) at all. The flux across I + would just equal that across the future horizon E^-(i^+) that separates regions I and II. Since the Killing field T^a is null and future directed on this horizon, this flux has to be positive. Indeed, we will show this explicitly in [11]. Thus, in terms of fields at I +, while general initial data can have arbitrarily large negative energies, the initial data induced by gravitational waves produced by realistic sources are appropriately constrained for the energy flux across I + to be positive. In the linearized case, it appears to be rather straightforward to make these constraints explicit [15]. An interesting challenge in full nonlinear general relativity is to find the analogous constraints on fields at I + induced by gravitational waves produced by realistic sources, in the absence of incoming radiation [10] (at least from the portion of the event horizon E^+(i^-) that lies to the future of the cross-over surface C). With these constraints at hand, one could hope to show that fluxes of energy carried by gravitational waves produced in physically realistic processes are positive in full nonlinear general relativity with Λ > 0, as one physically expects. Finally, note that our entire analysis, and in particular the limit (4.18) to Minkowski space, was carried out by restricting ourselves to the future Poincaré patch of de Sitter space. As discussed in section II A, in the description of isolated systems this restriction is motivated by direct physical considerations. However, one may still ask if the results can be extended to full de Sitter space-time. The explicit form of the solutions we presented is indeed restricted to the future Poincaré patch because of the heavy use of the cosmological slicing. But each of these solutions admits a well-defined extension to full de Sitter space-time simply because every solution in our covariant phase space Γ_Cov induces well-defined initial data on the de Sitter Cauchy surfaces. For these extended solutions, our main results also hold on I −. The central formula (4.9) holds for all ten Killing fields K^a of de Sitter space-time in this extension.
These constructions and results provide further guidance for the development of the gravitational radiation theory in full nonlinear general relativity with Λ > 0. We will conclude this discussion with two examples.
Consider first the problem of defining the 2-sphere energy-momentum and angular momentum charge integrals, analogous to the Bondi 4-momentum in the Λ = 0 case. For a given prescription for selecting asymptotic symmetries, considerations involving field equations and the geometry of I + (discussed in section 5 of [5]) suggest a natural candidate expression for these charges for Λ > 0. The difference between these integrals evaluated on any two 2-spheres on I + provides a candidate expression for the fluxes in the full theory across the region of I + bounded by these 2-spheres. One can show that their linearization provides precisely the flux formulas (4.11) and (4.22) at I +, derived using completely independent Hamiltonian methods. This result provides a powerful hint for the charge integrals in the full nonlinear theory. The remaining open issue is the selection of appropriate asymptotic symmetries, without assuming conformal flatness of the intrinsic geometry of I + (which, as we showed, trivializes the situation by forcing all fluxes to vanish).
A second issue in the full theory is the following. While observations strongly suggest that Λ is positive in our universe, almost all analytical calculations and numerical simulations in gravitational wave science set Λ to zero and work in the asymptotically Minkowskian context. (For notable exceptions, see [16-18].) Since the actual value of Λ is so small compared to the scales involved, say, in binary coalescences of astrophysical interest, it is natural to assume that setting Λ to zero is an excellent approximation. However, it is not completely clear that this is true, for two reasons. First, as we pointed out, the limit Λ → 0 is discontinuous in important respects. Second, Advanced LIGO will eventually be capable of detecting gravitational waves from sources that are ∼ 1 Gpc away, a distance that is approximately 20% of the cosmological radius. Therefore, apart from the intrinsic conceptual interest, it is important to be able to reliably calculate the 'errors' one makes by setting Λ to zero. 6 The details of the discussion of the Λ → 0 limit presented in this paper will help significantly in streamlining these calculations. | 2015-07-19T13:26:10.000Z | 2015-06-19T00:00:00.000 | {
"year": 2015,
"sha1": "27303e83513ea89493eb9841901b56d0e01717db",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.92.044011",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "1d82ea3e1ab415935bcf332377f06216f4e86214",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245816842 | pes2o/s2orc | v3-fos-license | Low-Frequency Bandgaps of the Lightweight Single-Phase Acoustic Metamaterials with Locally Resonant Archimedean Spirals
In order to meet the dual needs of single-phase vibration reduction and lightweight design, a square honeycomb acoustic metamaterial with locally resonant Archimedean spirals (SHAMLRAS) is proposed. The independent geometry parameters of SHAMLRAS structures are acquired by changing the spiral control equation. The mechanism of low-frequency bandgap generation and the directional attenuation mechanism of in-plane elastic waves are both explored through mode shapes, dispersion surfaces, and group velocities. Meanwhile, the effects of the spiral arrangement and of the adjustment of the equation parameters on the width and position of the low-frequency bandgap are discussed separately. In addition, a rational period design of the SHAMLRAS plate structure is used to analyze the filtering performance with transmission loss experiments and numerical simulations. The results show that the design of acoustic metamaterials with multiple Archimedean spirals has good local resonance properties and forms multiple low-frequency bandgaps below 500 Hz under reasonable parameter control. The spectrograms calculated from the excitation and response data of acceleration sensors are found to be in good agreement with the band structure. The work provides effective design ideas and a low-cost solution for low-frequency noise and vibration control in the aeronautic and astronautic industries.
Introducing a locally resonant mass block is currently one of the popular ways to achieve a low-frequency bandgap below 500 Hz for LPHM. Liu et al. [23] used a simple cubic lattice form to constitute a local resonance unit using a high-density mass block wrapped in a soft rubber material. The artificial periodic structure is formed by periodically arranging local resonance units in an elastic medium, which successfully exploits the local resonance effect of elastic waves to achieve a bandgap of around 400 Hz in a 20 mm cubic lattice. In contrast, traditional local resonance structures typically allow only a very narrow resonance bandgap at low frequencies [24]. For the purpose of obtaining a wide resonant bandgap, Dong et al. [25] proposed new multiple-bandgap phononic crystals capable of generating many flat bandgaps in the low-frequency band; however, the flat bandgaps generated by this unique locally resonant phononic crystal are still all above 500 Hz. To address the above problem, Li et al. [26] proposed a phononic crystal structure in the form of cylinders periodically attached to a thin plate. This cylinder structure attached to a thin plate corresponds to the formation of a "solid-state Helmholtz resonator" and succeeds in generating a large number of broad bandgaps in the range 256 Hz-855 Hz. In addition, Ning et al. [27] similarly designed a tunable metamaterial consisting of a frame structure, an airbag, and a counterweight by introducing a locally resonant mass block. They modulated the designed acoustic metamaterials through the gauge pressure and gas temperature inside the airbag, effectively attenuating wave propagation in the 13 Hz to 90 Hz range. Although a low-frequency bandgap of LPHM below 500 Hz can be obtained by introducing local resonance mass blocks as in the above-mentioned references, the lightweight efficiency of the LPHM deteriorates severely. On the other hand, the discontinuous distribution of materials with different properties in a complex spatial structure can bring great challenges to fabrication [28,29].
To endow the LPHM with the simultaneous advantages of lightweight efficiency and low-frequency bandgaps, single-phase lightweight periodic honeycomb materials with subwavelength bandgaps have attracted much attention in the research community [18,[30][31][32][33][34]. Chen et al. [35,36] designed a star-assisted metamaterial with a low-frequency bandgap and double-negative characteristics in a certain frequency range using single-phase materials, which solved the problem that conventional double-negative acoustic metamaterials are difficult to apply due to their complex structure and multi-phase material composition. Novel lightweight bidirectional re-entrant lattice metamaterials were proposed by Ren et al. [37], which form a wide bandgap of about 2 kHz in the range of 2.7 kHz to 4.7 kHz. Single-phase acoustic metamaterials with periodically arranged diverging star-shaped cells, resulting in a "low-frequency" bandgap from 1.44 kHz to 1.56 kHz, were proposed by Kumar and Pal [38]. In the above-mentioned single-phase LPHM, although the simultaneous advantages of lightweight efficiency and low-frequency bandgaps are realized, the bandgaps are all larger than 500 Hz.
Aiming at the challenging problem of vibration and noise reduction below 500 Hz for single-phase LPHM, a new kind of locally resonant single-phase lightweight periodic honeycomb material, the square honeycomb acoustic metamaterial with locally resonant Archimedean spirals (SHAMLRAS), is proposed in this paper. The SHAMLRAS consists of Archimedean spirals of the same material combined in a square honeycomb structure. These acoustic metamaterials with the introduction of Archimedean spirals have special resonance characteristics compared to existing single-phase acoustic metamaterials and have the capacity to form multiple bandgaps below 500 Hz (down to approximately 184.5 Hz) within a lattice size of 40 mm. The research has the potential to provide an invaluable guide to the engineering suppression of low-frequency noise and vibration.
The paper is organized into six sections including the introduction above. The second part describes the geometry of SHAMLRAS and illustrates the tools for wave propagation studies. The low-frequency bandgap characteristics and directional attenuation properties of elastic waves in the SHAMLRAS periodic structure are thoroughly discussed in Section 3. For more flexibility in obtaining the required low-frequency bandgap width, the effects of the material parameters, the arrangement of spirals, and the variables of the parameter control equations on the bandgap width and position are investigated in Section 4. The transmission loss experiments and COMSOL verification of the low-frequency bandgap are demonstrated in Section 5. Finally, the major achievements of this work are presented in Section 6.
Mechanical Design
In this study, a low-frequency single-phase acoustic metamaterial consisting of multiple spirals is proposed, based on a square honeycomb structure and a circular array of four spirals. The structural composition and lattice arrangement of the acoustic metamaterials are shown in Figure 1a-c, with a lattice constant of L = 40 mm and a ligament thickness of p = 1 mm. We can effectively reduce the bandgap frequency by tuning the parameters of the spiral line equation.
The spiral line control equation is derived from the Archimedean spiral, whose polar form is

ρ(θ) = α + βθ,

where θ is the pole angle, α is the pole diameter when θ = 0, i.e., the initial radius, and β is the increase (or decrease) of the pole diameter per 1 rad of rotation.
Suppose R₂ is the inner diameter, n is the turn number, and d is the circle distance; then this polar equation can be rewritten in terms of R₂, n, and d. Substituting the standard polar-to-Cartesian relations x = ρ cos θ and y = ρ sin θ then gives the corresponding Cartesian coordinate equation for the Archimedean spiral in the plane, which is the parametric control equation for the spiral in this paper.
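As a concrete illustration, the short Python sketch below samples points on such a spiral directly from the design parameters. It is a minimal sketch under stated assumptions: the function name, the identification α = R₂ (the paper calls R₂ the inner diameter, so a factor of one half may be required), β = d/(2π), and the default parameter range t ∈ [t₀, t₀ + 2πn] are our illustrative choices, not taken verbatim from the paper.

```python
import numpy as np

def archimedean_spiral(R2=2.0, d=3.0, n=2.0, t0=0.0, t1=None, samples=500):
    """Sample an Archimedean spiral rho(t) = alpha + beta*t in Cartesian form.

    Illustrative assumptions: alpha = R2 (initial radius) and
    beta = d / (2*pi), i.e. the pole diameter grows by d per full turn;
    by default the parameter t runs from t0 over n full turns.
    """
    if t1 is None:
        t1 = t0 + 2.0 * np.pi * n
    t = np.linspace(t0, t1, samples)        # pole angle (curve parameter)
    rho = R2 + (d / (2.0 * np.pi)) * t      # pole radius at angle t
    x = rho * np.cos(t)                     # polar -> Cartesian
    y = rho * np.sin(t)
    return x, y

x, y = archimedean_spiral()
print(x[:3], y[:3])                         # first few sampled points
```

Varying R₂, d, n, t₀ and t₁ in this sketch mimics the geometry changes studied later in Section 4.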
FE Modeling for the Free Wave Propagation
The unit cell and Brillouin zone of the periodic lattice are shown in Figure 1d. In the unit, e_i (i = 1, 2) is the basic lattice vector, which can be expressed in terms of the orthogonal Cartesian basis vectors and the lattice constant as

e₁ = L e_x,  e₂ = L e_y.

A reciprocal lattice space is defined according to the direct lattice space. In general terms, the basis vectors of the direct and reciprocal lattices satisfy

e_i · e*_j = 2π δ_ij,

where e_i denote the basis vectors of the direct lattice, e*_j denote the basis vectors of the reciprocal lattice, and the subscripts i and j take the integer values 1 and 2. Here δ_ij is the Kronecker delta,

δ_ij = 1 for i = j,  δ_ij = 0 for i ≠ j.

The coordinate positions of the lattice points in the reciprocal lattice can be represented by the reciprocal lattice vector G, a linear combination of the reciprocal lattice basis vectors,

G = r_o + n₁ e*₁ + n₂ e*₂,

where n₁ and n₂ are integers and r_o is the displacement of Point O. Accordingly, the reciprocal lattice vectors of the SHAMLRAS can be expressed as

e*₁ = (2π/L) e_x,  e*₂ = (2π/L) e_y.

Thus, the entire SHAMLRAS periodic structure can be constructed by shifting the unit cell along the basic lattice vectors (e₁, e₂) in two-dimensional space. The Brillouin zone of the basic lattice can also be obtained, as shown in Figure 1d, in which the black shaded region is the irreducible Brillouin zone (IBZ). The coordinates of the boundary points of the IBZ are given in Table 1.

According to Bloch's theorem, the part u(r) of the eigenwave field associated with the spatial position r can be expressed in the form of a spatial plane wave

u(r) = U_k(r) e^{−ik·r},  (9)

where r = (x, y, z) is the position vector, k = (k_x, k_y, k_z) is the wave vector in the first Brillouin zone, i is the imaginary unit, and U_k(r) is the eigenwave amplitude. The boundary displacements of the periodic structure can be controlled by Bloch boundary conditions (Floquet periodicity), so that

u(r + R) = u(r) e^{−ik·R},  (10)

where r is the position vector of a node on the boundary and R is a basis vector of the lattice. Determining the shape functions according to the general finite element procedure and assembling the stiffness and mass matrices of the unit yields the generalized eigenvalue equation of the unit,

(K_s − ω² M_s) u = f,  (11)

where K_s is the unit stiffness matrix, M_s is the mass matrix, and u and f are the vectors of generalized nodal displacements and forces. Using Equation (10), the computation for the whole SHAMLRAS periodic structure can be reduced to a single cell. Two edges in the x- and y-directions are selected as source boundaries in COMSOL Multiphysics 5.5, and the two edges selected as destination boundaries correspond to the two edges of the source boundaries. The Floquet periodicity conditions at the corresponding boundaries of the periodic cell are expressed as

u_dst = u_src e^{−ik_f·(r_dst − r_src)},  (12)

where u is the vector of dependent variables and the vector k_f represents the spatial periodicity of the excitation. The wave vector k is parametrically swept along the path O → A → B → O according to the first Brillouin zone of the SHAMLRAS structure given in Figure 1d. With Equations (11) and (12), the structural eigenfrequencies can be found for a given k. The dispersion curves are finally plotted with the wave vector k as the horizontal coordinate and the eigenfrequency as the vertical coordinate to obtain the band diagram of the SHAMLRAS; the forbidden regions between the dispersion curves constitute the band structure. Moreover, each eigenfrequency on a dispersion branch corresponds to a mode shape of the structure.
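The recipe above (impose the Floquet condition, sweep k along O → A → B → O, solve the eigenproblem at each k) can be illustrated on a toy problem. The Python sketch below does this for a scalar monatomic square lattice of masses and nearest-neighbour springs, whose Bloch dynamical matrix is known in closed form; the stiffness κ, mass m, and the 1×1 dynamical matrix are stand-ins, not the SHAMLRAS finite element matrices (which the paper assembles in COMSOL).

```python
import numpy as np

a, m, kappa = 1.0, 1.0, 1.0   # lattice constant, site mass, spring stiffness

def dynamical_matrix(kx, ky):
    # Bloch-reduced "K(k) - w^2 M" problem for a scalar square lattice of
    # masses and nearest-neighbour springs; here it is just 1x1.
    return np.array([[(2.0 * kappa / m) *
                      (2.0 - np.cos(kx * a) - np.cos(ky * a))]])

# IBZ boundary path O -> A -> B -> O of the square lattice,
# with O = (0, 0), A = (pi/a, 0), B = (pi/a, pi/a).
corners = [(0.0, 0.0), (np.pi / a, 0.0), (np.pi / a, np.pi / a), (0.0, 0.0)]
path = []
for (x0, y0), (x1, y1) in zip(corners[:-1], corners[1:]):
    for s in np.linspace(0.0, 1.0, 40, endpoint=False):
        path.append((x0 + s * (x1 - x0), y0 + s * (y1 - y0)))

bands = []
for kx, ky in path:
    w2 = np.linalg.eigvalsh(dynamical_matrix(kx, ky))  # eigenvalues are w^2
    bands.append(np.sqrt(w2))
bands = np.array(bands)       # shape (n_k, n_bands): the dispersion curves
print(bands.shape, bands.max())
```

Plotting the columns of `bands` against the position along the swept path reproduces, for this toy lattice, the kind of band diagram discussed below for the SHAMLRAS.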
The complete surface ω = ω(k) is the dispersion surface mentioned in Section 3.2. In this case, the corresponding dispersion surfaces of the different dispersion branches represent a class of modes at the corresponding eigenfrequencies. Hence, the n-th (n = 1, 2, 3, . . .) dispersion surface can also be called the n-th (n = 1, 2, 3, . . .) mode. In this case, there are as many orders of eigenfrequency as there are dispersion surfaces.
Bandgap Characteristics
In this section, the in-plane dynamical properties of SHAMLRAS based on Figure 1 are investigated. The geometrical parameters of the SHAMLRAS and the parameters of the photosensitive resin used are shown in Table 2. Based on the material and geometrical parameters provided, the calculated bandgap structure of the SHAMLRAS in the low-frequency range is shown in Figure 2a, where the pink shaded area is the bandgap in the calculated frequency range. For ease of discussion and analysis, a detailed schematic of the first bandgap is shown in Figure 2b. Clearly, there are a total of five complete bandgaps in the 0-500 Hz range. These five bandgaps are located between the third and fourth, the sixth and seventh, the eighth and ninth, the tenth and eleventh, and the twelfth and thirteenth dispersion curves respectively. The application of point group theory, which is used extensively in physics and molecular chemistry, to analyze and predict the electromagnetic response of complex structures is a highly rewarding endeavor [39]. However, the mode analysis approach used in these works [38,40] more easily helps us to understand the mechanism of bandgap generation. The vibrational modes at Points O, A, and B are analyzed below for the upper (green circles) and lower (red circles) dispersion curves corresponding to the first three bandgaps in Figure 2a. The size and direction of the arrows in the mode shape diagrams represent the displacement distance and direction of the structure relative to its original position. Figure 3 shows the mode shapes at Points O, A, and B for the upper and lower boundaries of the first bandgap respectively.
It can be seen that the displacement and direction of the mode shapes at the upper and lower boundaries of the fourth (1st upper) and third (1st lower) dispersion curves corresponding to the Point O are very similar in the second and fourth quadrants. In the first and third quadrants, however, the directions of the modes are opposite. At Point A, we can notice a clear exchange of energy in the structural vibrations, with the spiral vibrations in the second and fourth quadrants becoming the spiral vibrations in the first and third quadrants. In the A → B direction, attenuation occurs in the vibration modes corresponding to the fourth and third dispersion curves.
Moreover, the 1st upper vibration in the second and fourth quadrants decays significantly, which leads to the spiral vibration in the first and third quadrants. Comparing Figure 2b with the previous discussion, we can see that the energy exchange occurring at Point A is the main reason for the creation of the first bandgap. Figure 4a-f shows the mode shape diagrams corresponding to the sixth (2nd lower) and ninth (3rd upper) dispersion branches at Points O, A, and B. The marked arrows show that there is a clear difference in the direction of the spiral vibrations in the different quadrants, i.e., a greater degree of local resonance. For example, at Point B, the sixth dispersion branch tends to vibrate downwards in the first quadrant, leftwards in the second quadrant, upwards in the third quadrant, and to the right in the fourth quadrant. The ninth dispersion branch tends to vibrate to the lower left, lower right, upper right, and upper left in the four quadrants respectively. The effect of the elastic wave on the structure is weakened or completely counteracted by the local resonance of the spirals, i.e., there is a positive correlation between the degree of local resonance of the spirals and the attenuation performance for the elastic wave.
The local resonant mode of the unit structure is excited as the frequency approaches the natural frequency of the resonator. At this point, the elastic waves in the structure are strongly coupled with the structural local resonant mode. Furthermore, the energy is localized due to the constant exchange into the resonant unit, and the elastic wave does not propagate further. In the band structure, this is manifested by the truncation of the energy band starting at Point O by the resonant straight band, resulting in the formation of the third and fourth bandgaps. The vibration of the spirals on the seventh (2nd upper) dispersion branch is essentially constant in the third quadrant, as shown in Figure 4g. Comparing Figures 3 and 4, the consumption of elastic wave energy by the structure in the calculated frequency range originates from the local resonance of the spirals; the greater the degree of local resonance of the spirals, the better the elastic wave attenuation performance of the designed structure.
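A complete bandgap of the kind discussed here can be read off numerically from the sampled dispersion branches: a gap opens between branch j and branch j+1 whenever the maximum of branch j over the swept wave vectors lies below the minimum of branch j+1. The short Python sketch below implements this test; the function name and the toy two-branch data are illustrative only.

```python
import numpy as np

def complete_bandgaps(bands):
    """Return complete gaps from a band structure sampled along the IBZ path.

    `bands` has shape (n_k, n_branches), one column per dispersion branch,
    with the columns sorted in ascending frequency at each wave vector.
    """
    gaps = []
    for j in range(bands.shape[1] - 1):
        lo = bands[:, j].max()          # top of the lower branch
        hi = bands[:, j + 1].min()      # bottom of the upper branch
        if hi > lo:                     # complete gap between branches j+1, j+2
            gaps.append((j + 1, j + 2, lo, hi))
    return gaps

# Toy example: two branches separated by a gap from 2.0 to 3.0.
toy = np.column_stack([np.linspace(1.0, 2.0, 50), np.linspace(3.0, 4.0, 50)])
print(complete_bandgaps(toy))           # [(1, 2, 2.0, 3.0)]
```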
Dispersion Surfaces
In this paper, the dispersion surfaces of the SHAMLRAS periodic structure are calculated using the finite element method in the solid mechanics interface of COMSOL. The four edges of the square honeycomb metamaterial cell are divided by taking mesh = 40, i.e., the four edges of the first Brillouin zone in Figure 1d are divided into 1600 points. Once we have swept the wave vector k over all the points inside and on the boundary of the first Brillouin zone, the complete eigenfrequency surface ω = ω(k) corresponding to the different dispersion branches can be calculated using COMSOL. In this case, a complete eigenfrequency surface ω = ω(k) is also a mode. The corresponding dispersion surfaces of the different dispersion branches represent a class of modes at the corresponding eigenfrequencies. Hence, the n-th (n = 1, 2, 3, . . .) dispersion surface can also be called the n-th (n = 1, 2, 3, . . .) mode. Here we select the modes near the first three bandgaps (third to ninth dispersion branches) for further discussion and analysis.
The dispersion surface method can be used to show the elastic wave propagation properties in the periodic structure in a three-dimensional and visual way. Figure 5a shows the dispersion relations of the seven dispersion surfaces of the third to ninth dispersion branches of the SHAMLRAS periodic structure in three-dimensional space, where the x- and y-axis coordinates represent the coordinates of the first Brillouin zone points and the z-axis represents the frequency f. The comparative relationship between the SHAMLRAS dispersion relation and the bandgap structure in the horizontal view is shown in Figure 5b. There is a good correspondence between the dispersion curves of the SHAMLRAS structure and the dispersion curves of the bandgap structure, as well as between the bandgap positions of the two diagrams. This also confirms the correctness of the dispersion relations calculated by the above method.
Directional Propagation Property of Elastic Waves
We can obtain the ISO-frequency contours corresponding to the modes by projecting the dispersion surfaces in the k x and k y plane in Figure 5a and extracting the eigenfrequencies according to different division intervals.
The group velocity of two-dimensional periodic structures along the x- and y-directions for a given frequency can be written as

c_gx = ∂ω/∂k_x,  c_gy = ∂ω/∂k_y,

where the wave-vector components range over the first Brillouin zone, and a_x and a_y denote the lattice constants in the x- and y-directions respectively. The group velocity is defined as the gradient of an ISO-frequency curve when the dispersion surface is described by a two-dimensional contour. At the same time, the direction of the outer normal at each point on the contour is the direction of the group velocity at that point, which represents the direction of energy propagation of the vibration at that frequency. Therefore, the group velocity can be used to express the speed of energy propagation as well as the direction and magnitude of the elastic wave propagation. By calculating the gradient at each point on the contour, we can obtain the direction and region of propagation for a given vibration frequency. The values of c_gx and c_gy are used as the x- and y-coordinates to plot the group velocity for a specific frequency.
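Numerically, the group-velocity field can be obtained by differencing a sampled dispersion surface. The Python sketch below reuses the toy scalar square lattice from the earlier sketch as a stand-in for a COMSOL-computed surface ω(k), takes the gradient with np.gradient, and collects the (c_gx, c_gy) vectors near one ISO-frequency contour; the lattice model, the contour level and the tolerance are illustrative assumptions.

```python
import numpy as np

a, m, kappa = 1.0, 1.0, 1.0
kx = np.linspace(-np.pi / a, np.pi / a, 201)
ky = np.linspace(-np.pi / a, np.pi / a, 201)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
# Stand-in dispersion surface omega(kx, ky) (scalar square lattice).
omega = np.sqrt((2.0 * kappa / m) * (2.0 - np.cos(KX * a) - np.cos(KY * a)))

# c_gx = d(omega)/d(kx), c_gy = d(omega)/d(ky), by central differences.
c_gx, c_gy = np.gradient(omega, kx, ky)

# Sample the group-velocity vectors near the ISO-frequency contour
# omega = omega0; these vectors are normal to the contour and point
# along the direction of energy propagation at that frequency.
omega0 = 0.6 * omega.max()
near_contour = np.abs(omega - omega0) < 0.01 * omega.max()
vectors = np.column_stack([c_gx[near_contour], c_gy[near_contour]])
print(vectors.shape, vectors[:3])
```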
With increasing frequency, the vibration propagation of the SHAMLRAS structure is mostly concentrated in the k_x and k_y directions, as shown in Figures 6-12. At the same time, the vibration propagation generally behaves as follows: it is enhanced, then weakened, and then enhanced again. In particular, the sixth mode has the weakest vibration propagation in the k_x and k_y directions. On the other hand, we can observe from Figures 6a, 7a, 8a, 9a, 10a, 11a and 12a that there is weak anisotropy in the k_x and k_y directions for the fifth and sixth modes at the lower frequencies of the dispersion surface, whereas there is strong anisotropy in these directions for the fourth, seventh and eighth modes. Furthermore, only the fifth and ninth modes have strong anisotropy in the direction with the diagonal as the axis of symmetry. To be more precise in evaluating the propagation of elastic waves on the different dispersion surfaces, the ISO-frequency contours for the three frequency cases are marked in (b) of Figures 6-12 with heavy black solid lines, and the group velocities corresponding to these ISO-frequency contours are shown in (c,d) of Figures 6-12 respectively. The group velocity in Figure 6c-e shows a distribution of points in the 0° to 360° direction. The points on the outside of the group velocity distribution become more densely distributed in the k_x and k_y directions as the frequency increases; however, the group velocity points in this direction disappear as the frequency increases to f = 184.2 Hz. In contrast, with increasing frequency f the group velocity distribution becomes denser in the region near the diagonal as the axis of symmetry.
The clusters of velocities in the k_x and k_y directions increase remarkably with increasing frequency, and the points in the other directions decrease significantly, according to (c,d) of Figures 9, 10 and 12. However, a highly visible concentration of group velocity points in the diagonal direction of the graph is observed in Figure 6c,d as well as in Figure 11c,d when the frequencies are located in the fourth and eighth modes, and the degree of clustering increases noticeably with increasing frequency. This demonstrates that there is strong energy aggregation at this location; it also indicates that, owing to the small group velocity distribution, the propagation of energy in the k_x and k_y directions is much weaker than in the diagonal direction. Consequently, we can consider that elastic waves form vibration blind zones in the k_x and k_y directions, and this characteristic is very useful for vibration isolation needs in specific directions in engineering.
Influence of Spiral Geometry on the Bandgap
From the analysis in Section 2.1, the structure of SHAMLRAS changes markedly when the inner diameter (R₂), the turn number (n), the circle distance (d), and the start and end values (t₀ and t₁) of the parameter t are changed in Equation (5). Changes in these parameters in turn influence the bandgap characteristics and the dispersion relations of the SHAMLRAS structure. To deal more flexibly with elastic waves of different frequencies in engineering damping applications, the influence of the spiral geometry parameters on the bandgap width and position is discussed below.
The influence of the circle distance d and the inner diameter R₂ on the bandgap width of the SHAMLRAS structure is shown in Figure 13a-d respectively. With increasing circle distance d, the width of the second bandgap within the 1000 Hz range first increases and then decreases. Although the bandgap width of the SHAMLRAS structure decreases as the circle distance d increases, the frequency of each bandgap also decreases remarkably. Furthermore, a comparison of Figure 13a,b shows that the first bandgap is generated between the third and fourth dispersion curves when d = 2.75, 3.25, 3.75, 4.25. The second to fifth bandgaps then correspond to the intervals between the fifth and sixth, the seventh and eighth, the ninth and tenth, and the eleventh and twelfth dispersion curves respectively. However, in the case of d = 2.25, no bandgap is created between the third and fourth dispersion curves, and the first bandgap is located between the sixth and seventh dispersion curves. As the inner diameter R₂ is varied, the width of the first three bandgaps in Figure 13c is negatively correlated with the change in R₂, the width of the fifth bandgap is positively correlated with it, and the fourth bandgap first widens and then narrows. Moreover, the position of the dispersion curves bounding the first five bandgaps of the SHAMLRAS structure does not change, as a comparison of Figure 13a,b shows. From the above discussion, there is a clear indication that, when a low-frequency bandgap is needed, we should increase the circle distance d as well as R₂ within the limited dimensions of the SHAMLRAS structure.
The dependence of the bandgap width on the turn number n, the starting value t₀, and the end value t₁ is shown in Figure 14a-f respectively. The shapes of the SHAMLRAS structures are noted to change markedly with the variation of these parameters. The variation of these parameters also leads to changes in the position of the bandgaps of the SHAMLRAS structure between the different dispersion curves, especially in the low-frequency bandgap region. A narrow bandgap consistently appears between the second and third dispersion curves as the values of these three parameters gradually approach the structural parameters selected in Table 2. As the selected parameters change towards the structural parameters shown in Table 2, the bandgaps tend to shrink at all frequency positions; in particular, the frequency values between the second and third dispersion curves overlap, and the bandgap at this position disappears. We also observe an interesting phenomenon: there is a striking similarity in the effect of the turn number n in Figure 14a,b and the end value t₁ in Figure 14e,f on the SHAMLRAS geometry and band structure. This suggests a strong one-to-one correspondence between the turn number n and the end value t₁ for the SHAMLRAS structure. On the other hand, variations of the three parameters in Figure 14 have a relatively strong effect on the bandgap width of the SHAMLRAS structure. Although the variation in bandgap width here does not show the same continuity as that in Figure 13, the position of the dispersion curves is the same for each bandgap, except for the structural parameters in Table 2.
In the above case, the dispersion curves of the second to fifth bandgaps of the different SHAMLRAS structures are located between the third and fourth dispersion curves, between the sixth and seventh dispersion curves, between the eighth and ninth dispersion curves, and between the tenth and eleventh dispersion curves respectively. This indicates that the relationship between the position of the bandgap and the dispersion curves is relatively stable for the range of parameters chosen.
Influence of Spiral Arrangement on the Bandgap
The effect of the arrangement of spirals within the square honeycomb structure on the bandgap width and position is explored here. As shown in Figure 15, the bandgap width distribution is calculated for each case by varying the arrangement of the spirals in different quadrants. When the spiral arrangement is transformed in the first and third quadrants, the bandgap between the third and fourth dispersion curves, the bandgap between the sixth and seventh dispersion curves, and the bandgap between the eighth and ninth dispersion curves all narrow and eventually disappear. The different arrangements of spirals demonstrate that only the first three bandgaps of the structural parameters shown in Table 2 are significantly affected.
Figure 15. The dependence of the bandgap width on the spiral arrangement in the transformed quadrants (a,b), where "original" denotes the bandgap distribution for the parameters shown in Table 2, "I" denotes a transformation of the spiral arrangement in the first quadrant, "I&II" a transformation in the first and second quadrants, and "I&III" a transformation in the first and third quadrants.
Influence of Material Parameters on the Bandgap
The influence of the material parameters Young's modulus E, Poisson's ratio ν, and density ρ (only one of these parameters is changed at a time) on the width and position of the bandgaps in the numerical simulation is illustrated in Figure 16. The frequency position of each bandgap rises gradually with increasing Young's modulus E and Poisson's ratio ν, as shown in Figure 16a,c. Simultaneously, the width of the five bandgaps shown in the diagram increases. In contrast, both the width and position of the bandgaps in Figure 16e show a negative correlation with the variation in density ρ. Additionally, it can be observed that higher-frequency bandgaps are more sensitive to changes in the material parameters; in other words, the higher the bandgap frequency, the faster the bandgap width increases (or decreases) with the change of material parameters. On the other hand, the relative positions of the bandgaps in the dispersion curves do not change. This demonstrates that changes in Young's modulus E, Poisson's ratio ν, and density ρ during the simulations neither open a new bandgap nor close an existing one in the band structure of SHAMLRAS. As the analysis above shows, the smaller the values of Young's modulus E and Poisson's ratio ν of the material, the lower the frequency of the bandgap. At the same time, it is easier to obtain bandgaps at lower frequencies with higher values of the density ρ.
Filtering Properties of the Finite Periodic Structure of the SHAMLRAS
The dispersion relations and free wave propagation properties of infinite SHAMLRAS structures have been analyzed in the previous sections, but infinite structures may not meet the demands of flexible load-bearing in engineering. In this section, the vibration isolation performance of finite SHAMLRAS structures is investigated through both experimental tests and numerical simulations. The geometry and loading environment in Figure 17a were used for both the transmission loss experiments and the COMSOL simulations. To evaluate the vibration isolation performance of the SHAMLRAS structure, the frequency response function (FRF) was calculated with the left-hand panels of 5 mm thickness as the excitation input and the right-hand panels of 2 mm thickness as the response receiver. The geometric and material parameters shown in Table 2 were used as the basis for obtaining the specimens required in the experiments through 3D printing, and the experiment was carried out on the transmission loss setup shown in Figure 17b. After the printed specimen was fixed to the vertical vibration table through the 5 mm thick prefabricated base, the excitation and response data from the experimental test were recorded with accelerometers to obtain the spectrum in Figure 17c. We used acceleration as the input excitation signal in both the experiments and the simulations, and the calculated spectra were placed in Figure 17c for comparative analysis.
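The frequency response function used here can be estimated from the two acceleration records in a few lines. The Python sketch below is a minimal illustration with surrogate signals: the sampling rate, the synthetic excitation/response pair, and the use of Welch-averaged spectra with an H1-type estimator are all assumptions standing in for the actual measured accelerometer data and the processing chain used in the paper.

```python
import numpy as np
from scipy import signal

fs = 4096                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
a_in = rng.standard_normal(t.size)          # surrogate excitation acceleration
a_out = signal.lfilter([0.05], [1.0, -0.95], a_in)  # surrogate response

# H1-type FRF estimate between excitation and response,
# FRF(f) = S_xy(f) / S_xx(f), expressed in dB.
f, S_xy = signal.csd(a_in, a_out, fs=fs, nperseg=4096)
_, S_xx = signal.welch(a_in, fs=fs, nperseg=4096)
frf_db = 20.0 * np.log10(np.abs(S_xy / S_xx))

# Frequency of strongest attenuation below 500 Hz: in a bandgap test,
# dips of the measured curve should line up with the computed bandgaps.
band = (f > 0) & (f < 500)
print(f[band][np.argmin(frf_db[band])])
```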
Conclusions
In this paper, lightweight single-phase acoustic metamaterials with low-frequency bandgap properties are designed by combining a square honeycomb structure with multiple Archimedean spirals.
Using Bloch's theorem and finite element analysis, the embedded Archimedean spirals are shown to open multiple complete bandgaps below 500 Hz. The vibrational modes are discussed for the dispersion curves near the first three bandgaps at the boundary points of the IBZ. The generation of low-frequency bandgaps was found to be related to the degree of local resonance of the spirals. The characteristics of the directional propagation of elastic waves in SHAMLRAS periodic structures and the attenuation properties of in-plane vibrations are analyzed using dispersion surfaces and group velocities at specific frequencies. Optimizing the width and position of the low-frequency bandgap to enhance wave attenuation can be achieved by adjusting the material parameters, the spiral arrangement, and the parameters of the control equations. In particular, arranging the spirals in circular arrays and increasing the circle distance d and the inner diameter R₂ within a limited dimension help to obtain a lower-frequency bandgap. Finally, the spectra obtained through transmission loss experiments and COMSOL simulations are used to demonstrate and verify the vibration isolation performance of SHAMLRAS structures of finite dimensions. This proves the great potential of the SHAMLRAS structure for achieving low-frequency noise and vibration control using single-phase materials. | 2022-01-09T16:17:25.708Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "8d12eb9bb36eb791a031a723c697310bb1ed353b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8e57b4f8f971fe1f51329eba2dfebd5ce2ac6969",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236497921 | pes2o/s2orc | v3-fos-license | Sealing the leaky pipeline: attracting and retaining women in cardiology
Multiple publications have addressed the under-representation of women in the cardiology workforce, and indeed in leadership positions and procedural subspecialities, despite gender parity among medical school graduates. The work-life balance does not appear to be the only determining factor, since other specialties such as obstetrics have an adequate representation of women. Vlachadis Castles et al report the results from their online survey of 452 female doctors (both trainees and specialists) from Australia and New Zealand, 13% of whom were women in cardiology. Female cardiologists reported working longer hours and more on-call commitments; significantly fewer women in cardiology reported a balanced life, or that cardiology was family friendly or female friendly, despite a greater earning capacity and an overwhelming majority agreeing that they were professionally challenged whilst intellectually stimulated in their jobs. Our editorial addresses the deterrents to women in cardiology seeking leadership opportunities in all areas including academic, administrative and research positions.
Much has been discussed about the 'leaky pipeline' of women in cardiology globally. There is significant under-representation of women in the cardiology workforce, and indeed in leadership positions and procedural subspecialities, despite gender parity among medical graduates. 1 2 A number of diverse challenges in attracting and indeed retaining women in cardiology have been identified (Box 1), not the least of which is the lack of work-life balance in what is often perceived as a male-dominated specialty. [2][3][4][5] The paper by Vlachadis Castles et al reports data from an online survey of 452 female doctors (both trainees and specialists) from Australia and New Zealand, 13% of whom were women in cardiology. 6 Female cardiologists reported working longer hours and more on-call commitments; significantly fewer women in cardiology reported a balanced life, or that cardiology was family friendly or female friendly, despite a greater earning capacity and an overwhelming majority agreeing that they were professionally challenged while intellectually stimulated in their jobs. 6 Poor work-life balance, or perceptions thereof, has been a significant deterrent to women choosing cardiology as a subspeciality. [2][3][4][5] Indeed, worldwide, gender disparity is even greater in procedure-based subspecialities of cardiology, with women comprising only between 4.5% and 7.5% of the interventional cardiology workforce. 4 7 However, it is interesting that specialties with equally demanding on-call hours, such as obstetrics and gynaecology, do not have similar issues of recruitment and retention of women; this is testament to the fact that a lack of work-life balance is perhaps not the only deterrent to female uptake of cardiology as a career choice.
A lack of flexibility in working hours, particularly for women with young children, has been previously reported, 2 4 and could be what deters women from pursuing leadership roles, prompting some to change specialty completely or to leave academia in favour of private practice, which potentially offers greater flexibility in working hours. Indeed, more recently, there has been a shift towards reframing the issue of work-life balance as 'work-life integration' (WLI), where the goal is to create synergy between work, home, community and the private self. 8 To this end, a solution might be the introduction of flexible training programmes for trainees and indeed flexible working hours, job sharing and shift work for consultants. 1 2 In the UK, where a relatively well-structured less than full-time (LTFT) training programme exists, cardiology LTFT trainees constitute only 4% (compared with 24.2% in paediatrics and 20.3% in obstetrics and gynaecology), 9 10 and are predominantly (69%) women. 10 However, almost a third of LTFT trainees have perceived that they do not get the same training opportunities as full-time colleagues. 10 Additionally, within interventional cardiology some perceive that LTFT is taken less seriously. Perceptions are important, as this may be why far fewer LTFT trainees specify electrophysiology (EP) and interventional cardiology as a career choice.
This may remain the status quo until more of the cardiology consultant workforce, male and female, have experienced working LTFT and can appreciate how valuable it is for meeting family commitments and achieving work-life integration. Recent real-life examples of female cardiologists in procedure-based specialties have shown that working LTFT is possible and demonstrates no lesser commitment to the specialty. 7 This aspect is so important because, traditionally, women have tended to bear the brunt of domestic responsibilities, and have reported that these competing demands may have hindered their professional development, pursuit of leadership positions and ability to travel for professional advancement, 3 4 thus limiting the networking opportunities that are invaluable for career progression and seeking mentorship. It is, however, encouraging that more recently, significantly more men have also cited family responsibilities affecting their ability to travel for meetings and committee work. 4 The contemporary shift of major meetings to virtual/hybrid platforms, resulting from travel restrictions during the pandemic, along with a shift in attitude towards gender disparity, has increased opportunities for female participation in meetings, on panels, as speakers and in the audience.
Workplace culture has been commented on within cardiology internationally. 3-5 9 11 Female trainees and consultants have reported more sexism and discrimination, particularly relating to parenting and domestic responsibilities. 4 11 Sexism has detrimental downstream effects, often resulting in a lack of professional confidence when working with patients and colleagues, limiting career aspirations, and potentially leading to fewer leadership and professional pursuits. 11 Women are also frequently at the receiving end of 'benevolent sexism', manifest as being asked about plans for pregnancy during fellowship interviews and being cautioned against difficult paths ahead.
Furthermore, gross inaccuracies around radiation, especially largely unfounded misconceptions about the potential harms of radiation exposure during pregnancy, continue to derail female recruitment into interventional cardiology and EP. 1 3 These misconceptions need to be debunked, as they deter a talent pool away from a rewarding subspeciality.
So why is there such a push to improve the gender balance in cardiology? A more diverse and better gender-balanced workforce provides more optimal care for patients. 12 This is why we need robust methods of enhancing recruitment and retention of women in cardiology, and tangible solutions to tackle the leaky pipeline at varying stages of careers. Attracting the best talent pool to our specialty, irrespective of gender, is certainly in its best interests. As shown in this paper by Vlachadis Castles, and others, younger people-both men and women-increasingly value work-life balance, stable hours and family friendliness in their career choices. 3-6 A lack of female mentors and role models in cardiology has been repeatedly cited as a potential disincentive for the recruitment and retention of women in cardiology, 3 7 9 but with the decline of all-male panels ('manels') this is being addressed.

Box 1 Challenges to recruiting and retaining women in cardiology
► Perceived work-life imbalance.
► Discrimination.
► Radiation concerns.
► Opportunities for career progression.
► Family planning.
► Lack of mentorship.
► Unequal financial compensation.
► Workplace culture.

Figure 1 Strategies to enhance recruitment and retention of women in cardiology.
Perceptions are important, and cardiology needs to better demonstrate that, despite the many challenges, it is certainly possible to combine family life with a rewarding career, inclusive of research, fellowship and a high profile, even within procedure-based subspecialities. 7 'You can't be what you can't see' has been oft-quoted in the realms of academic cardiology. Notably, since the issues of gender disparity were first discussed, 1 the workforce gender balance has been improving, but there is still some way to go.
Globally, an active effort is increasingly being made by cardiovascular professional societies and independent organisations, such as Women as One, to address the female under-representation within the specialty. 2 These efforts include the formation of women in cardiology working groups, networking events, mentorship programmes and awards schemes. But perhaps the most effective interventions come in the form of a collaborative effort by the entire cardiology team to promote diversity in the workforce (figure 1). An active commitment by male, female and gender-neutral colleagues to support one another, discuss and understand experiences, provide mentorship and promote a more inclusive culture in the workplace will certainly go a long way towards achieving gender parity in cardiology.
Indeed, the issues identified for women in cardiology are relevant for all in cardiology. Ultimately, addressing these issues will improve the gender imbalance in our fascinating specialty and help to create a more diverse workforce, resulting in optimal patient care. For all of us, optimal patient care is the common goal. As a by-product of the process, all working lives will improve. We can see that we are on our way to achieving this change. We just need to accelerate the pace of change.
Twitter F Aaysha Cader @aayshacader, Mirvat Alasnag @mirvatalasnag and Shrilla Banerjee @ShrillaB

Contributors All authors contributed equally to this manuscript and reviewed the final submission.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Commissioned; internally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
"year": 2021,
"sha1": "4b58e724c8f491f64f646f3446e72aa16c9a61f2",
"oa_license": "CCBYNC",
"oa_url": "https://openheart.bmj.com/content/openhrt/8/2/e001751.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bfc7b19325454a4af6b87cafacc9f48777260dcf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Scaling organizational agility: key insights from an incumbent firm's agile transformation
Purpose – This paper aims to examine the key challenges experienced and lessons learned when organizations undergo large-scale agile transformations and seeks to answer the question of how incumbent firms achieve agility at scale.

Design/methodology/approach – Building on a case study of a multinational corporation seeking to scale up agility, the authors combined 36 semistructured interviews with secondary data from the organization to analyze its transformation since the early planning period.

Findings – The results show how incumbent firms develop and successfully integrate agility-enhancing capabilities to sense, seize and transform in times of digital transformation and rapid change. The findings highlight how agility can be established initially at the divisional level, namely with a key accelerator in the form of a center of competence, and later prepared to be scaled up across the organization. Moreover, the authors abstract and organize the findings according to the dynamic capabilities framework and offer propositions of how companies can achieve organizational agility by scaling up agility from a divisional to an organizational level.

Practical implications – Along with in-depth insights into agile transformations, this article provides practitioners with guidance for developing agility-enhancing capabilities within incumbent organizations and creating, scaling and managing agility across them.

Originality/value – Examining the case of a multinational corporation's exceptional, pioneering effort to scale agility, this article addresses the strategic importance of agility and explains how organizational agility can serve incumbent firms in industries characterized by uncertainty and intense competition.
Introduction
The ubiquity of digitalization and disruptive business models is currently reshaping industries and challenging many organizations to pursue large-scale transformations to keep pace in today's volatile, fast-moving business environment (Appelbaum et al., 2017;Parida et al., 2019). Companies' 20th-century models, in which competitive advantage derives from economies of scale, hierarchical structures and complex decision-making processes, are simply no longer fast enough to keep up (Holbeche, 2018). In response, the concept of agility has gained momentum, especially following the emergence of transformational digital technology and its potential to fundamentally change corporations' business models, relationships with customers, competences, products and services and ecosystems (Bharadwaj et al., 2013;Chan et al., 2019;Oliva et al., 2019). According to Doz et al. (2008, p. 65), agility is "a higher-order dynamic capability that is built over time" and thus requires being aware of trends and forces, making bold decisions fast and reconfiguring business systems and rapidly redeploying resources. Such higher-order capabilities include the dynamic capabilities of sensing (i.e. the capacity to identify, develop, co-develop and assess new technological opportunities and threats in relation to customers' needs), seizing (i.e. the capacity to mobilize resources to address opportunities and capture value) and transforming (i.e. the capacity to maintain competitiveness via continuous renewal) that are necessary for addressing new opportunities and threats brought forth by today's new technological environment (Teece et al., 2016). Several studies have indicated, however, that managers experience severe difficulties with implementing agility across their organizations in large-scale settings, or agility at scale (Calnan and Rozen, 2019;Dikert et al., 2016;Sommer, 2019), defined as the ability to spread and drive agility across an entire organization. Likewise, a global study by McKinsey & Company involving more than 2,000 organizations revealed that only approximately 10% of the ones that had recently undergone an agile transformation characterized it as having been highly successful (Aghina et al., 2020). When executed poorly, agile transformations can cause not only frustration but also intense friction with entrenched legacy systems and other aspects of the organization's culture. Annosi et al. (2020a, b), for example, have examined potential pitfalls and negative consequences of implementing agility, including a decreased interest in learning and high levels of stress and argued that large-scale adoptions of agile practices often result in problems with team-team coordination. Other potential negative consequences of agile transformations include decreased efficacy in individual performance amid increased pressure to perform, less access to and the weakened accumulation of knowledge related to decentralized team structures, and the reduced integration of knowledge.
Even so, pursuing agility at scale seems to be worth the risks. Prominent corporations such as Amazon, ING Group and Bosch have demonstrated how large-scale agile transformations can be shaped while maintaining traditional functions in parallel with agile units (Rigby et al., 2018). In fact, companies that are further along in their agile transformations achieve "around 30% gains in efficiency, customer satisfaction, employee engagement, and operational performance" and become five to ten times faster than their less agile competitors (Aghina et al., 2020, p. 2). Even though the literature and practical examples offer various conceptualizations of agility at the project and organizational levels, incumbent firms continue to struggle to identify the concept that best fits their distinct purposes and to deploy it in ways that achieve agility at scale (Kalenda et al., 2018).
Thus, identifying appropriate agile practices, methods and frameworks for scaling agility-that is, successfully improving agility across an organization-requires profound knowledge, expertise and understanding. Along those lines, research has addressed the need for tailored, company-specific agile frameworks because agility hinges on organizational complexity, organizational routines and the renewal of key competences (Annosi et al., 2020a, b). In our research, we thus conceived agility at scale as the ability to drive agility at an organizational level but not as a universally applicable concept in practice. Instead, agility at scale requires a set of capabilities involved in not only allocating resources but also dynamically balancing them to manage uncertainty and maintain flexibility over time (Shams et al., 2020;Teece et al., 2016;Vecchiato, 2015;Weber and Tarba, 2014) as well as a multilayered analysis of the strategic, organizational, team-focused and leadership levels that fundamentally impact the entire organizational system (Girod and Kralik, 2021).
Although scholars have emphasized the need for further investigation into ways of scaling and thereby enhancing agility in organizations (Girod and Kralik, 2021), there is little conceptualization, let alone empirical evidence, about how or what is needed to achieve agility at scale. Therefore, we investigated the dissemination of agility from a divisional to an organizational level and studied how organizations scale agility. As a result of our investigation, this paper examines the key challenges experienced and lessons learned when organizations undergo large-scale agile transformations and seeks to answer the question of how incumbent firms achieve agility at scale.
To accomplish our objective, we performed a single-case study to understand the implementation of an agile center of competence (ACC) and the scaling of agility across an organization. The case was a multinational financial services company operating in the insurance and asset management industry that has been exposed to rapidly emerging digital technology, changing behavior among customers, increased industry competition and disruptive threats due to the emergence of innovative fintech firms (Verhoef et al., 2021;Yan et al., 2018). Thus, to keep pace with digital opportunities in today's volatile digital business environment, the company's management announced that the organization would undertake a large-scale transformation-namely, the launch of a global, corporate-wide digitalization agenda and the ensuing opening of an ACC.
By consolidating research on organizational agility and dynamic capabilities, we investigated how an incumbent firm undergoes a large-scale agile transformation and, as a result, can provide a theoretically grounded set of transformative organizational actions as well as several critical lessons for implementing agility. More precisely, by applying the dynamic capabilities framework, we illuminated how an incumbent firm might employ the framework as a means to implement agility and identified a pathway for keeping pace with digital opportunities in the volatile digital business environment. In particular, we analyzed the role of top management in the actions of sensing, seizing and transforming within the context of an agile transformation and identified how two complementary systems, hierarchy and network (Kotter, 2012), are linked through a separate entity-in our case, an ACC-that can serve to accelerate employees' realization of an agile organizational culture and mindset. Herein, we detail how incumbent firms develop and successfully integrate the agility-enhancing capabilities of sensing, seizing and transforming in times of rapid change and uncertainty. By extension, with this article, we contribute to a broader discussion on organizational agility and highlight the importance of merging top management's commitment with the allocation of resources and a mutual understanding of the organization's strategic objectives.
In the remainder of this article, Section 2 outlines the theoretical foundations of dynamic capabilities and organizational agility in the literature. Next, Section 3 explains the case study and its abductive research process, after which Section 4 presents the results of our analysis. Section 5 discusses our findings and condenses them as propositions, after which Section 6 articulates our conclusions, the study's limitations and recommended directions for future research.
2. Theoretical background: dynamic capabilities as a framework for organizational agility

In response to increased global competition, new forms of digital technology, disruptive threats and changing consumer behavior, agility has received considerable attention in practice and in scholarly work on management (Harraf et al., 2015). However, given agility's emergent nature, scholars and managers continue to debate what agility and agile mean. Table A1 presents agility-related terms that we identified in the literature.
Of all of those terms, this article focuses on organizational agility, defined as "the capacity of an organization to efficiently and effectively redeploy/redirect its resources to value creating and value protecting (and capturing) higher-yield activities as internal and external circumstances warrant" (Teece et al., 2016, p. 17). We understand organizational agility, also termed agility at scale, as the ability to drive agility broadly across organizations via practices, values and behaviors that enable the organizations to become more resilient, flexible and innovative, especially given today's tumultuous markets and the rapid advancement of digital technology (Teece et al., 2016). In a large-scale study of leaders in digital transformation, organizational agility has been identified as the most important factor of success that differentiates leaders from laggards in the agile transformation (Brock and von Wangenheim, 2019).
The literature often describes organizational agility as a specific higher-order dynamic capability (Doz et al., 2008;Lee et al., 2015;Walter, 2021), one involving the role of strategic management in integrating, building and reconfiguring competences as a means to continuously adjust and adapt ways of creating and capturing value in light of internal and external circumstances (Teece et al., 2016). In that context, it is necessary to distinguish ordinary capabilities from dynamic ones (Winter, 2003). On the one hand, ordinary capabilities, characterized as "'how we earn a living now' capabilities" (Winter, 2003, p. 992), are needed to produce and sell a (static) set of products or services. On the other, dynamic capabilities enable organizations to exploit opportunities and avoid threats by adapting and/or extending ordinary capabilities that allow them to develop new processes, products and/or services for improved speed, efficiency, or effectiveness (Drnevich and Kriauciunas, 2011). Because organizational agility entails several components-proactivity, change, responsiveness and adaptiveness-in sensing and responding to opportunities (Lee et al., 2015), and involves strategic sensitivity (i.e. an awareness of changes and real-time sense making), leadership unity (i.e. the ability to make bold decisions fast) and resource fluidity (i.e. the ability to reconfigure and redeploy systems and resources rapidly; Doz et al., 2008), it can be regarded as an important dynamic capability. Indeed, as the digital transformation continues to accelerate and amplify uncertainty, volatility and complexity, organizational agility is a critical dynamic capability that incumbent firms need to sense and seize opportunities and, in turn, succeed in their digital transformations.
In the literature on strategic management, dynamic capabilities refers to "the firm's ability to integrate, build, and reconfigure internal and external competences to address rapidly changing environments" (Teece et al., 1997, p. 516). Those capabilities are reinforced by organizational and managerial competences used to identify and reshape the organizational environment and generate new business models able to address new opportunities or emerging threats (Eisenhardt and Martin, 2000;Ghezzi and Cavallo, 2020). Teece (2007) has identified three primary types of dynamic capabilities-sensing, seizing and transforming-that allow companies to compete and survive in the long term while facing fundamental uncertainty due to highly disruptive business models along with rapid technological change.
First, regarding the capability of sensing, environments characterized by uncertainty require organizations to sense fundamental changes and opportunities in advance in order to keep ahead of rivals (Helfat and Peteraf, 2015;Reeves et al., 2015;Schoemaker et al., 2018). In particular, generative sensing capabilities support activities used to identify technological opportunities, analyze markets, listen to customers and formulate hypotheses on how the future might look. They require managerial insights and vision, accompanied by constant research and the probing of customers' needs and technological possibilities to discover new market opportunities (Teece et al., 2016), which together involve continuous learning, interpretation, scenario planning and creative activity across the organization (Schoemaker et al., 2013). Beyond that, if markets are impacted by continuous change, then the organization's search activities should be both local and distant (March and Simon, 1958). Using typical practices of open innovation, organizations can also incorporate internal and external stakeholders able to actively contribute their broad expertise and knowledge (Chesbrough and Appleyard, 2007). Further still, they may deploy new approaches or organizational formats-for instance, establishing separate business units and partnerships with accelerators or corporate incubators-that allow combining activities of exploration with activities of exploitation (Gibson and Birkinshaw, 2004;Weiblen and Chesbrough, 2015). All of those efforts underscore the relevance of broad-based external processes to search for and identify trends, create hypotheses and contemplate possible future scenarios to plan for (Teece et al., 2016). Along those lines, in our study we investigated top management's role in sensing changes and opportunities in relation to scaling agility in their organization.
Second, when sensing new technological or market opportunities, organizations have to seize those opportunities by leveraging new products, services and/or processes. Each organization needs to identify an appropriate business model for defining their commercialization strategy and investment priorities (Agostini and Nosella, 2021;Chesbrough, 2010). Mastering the mechanism of dynamic transformation thus becomes imperative for creating agile organizational structures and optimally exploiting current market opportunities while simultaneously exploring new ones as they arise due to changing environments and emerging digital technology (O'Reilly and Tushman, 2011). In that context, exploitation relates to continuous improvement and efficient implementation, whereas exploration is linked to experimentation and discovery (Bodwell and Chermack, 2010). However, due to the historically entrenched emphasis on efficiency within stable environments, companies continue to struggle in jointly pursuing exploitation and exploration-the former to ensure their current viability, the latter to ensure their future viability-a combination termed organizational ambidexterity (Bodwell and Chermack, 2010;Tushman and O'Reilly, 1996). Scholars have also averred that such ambidexterity can nevertheless create a dilemma, for managers often face difficulties with striking an appropriate balance between the requirements of exploration and of exploitation (Doz, 2020;Lewis et al., 2014). At the same time, efficiency's dependence on organizational structures is nothing new; in fact, research on their relationship dates back to the 1960s (Doz, 2020;Doz and Kosonen, 2010), and companies have long been built to maximize efficiency in stable environments, with well-defined and structural systems, hierarchies, roles and responsibilities (Kotter, 2014). On that topic, scholars have also highlighted that mechanistic management systems involving hierarchy, top-down decision-making, efficiency and/or economies of scale, are most suitable for stable conditions (Burns and Stalker, 1961). By contrast, organizations facing today's dynamic, digital business environments need to develop flexible systems of operational routines that incorporate a network structure of control, authority and communication and that are appropriate for shifting conditions (Burns and Stalker, 1961). Because corporations face a wide range of challenges in realizing both exploitative and explorative capabilities and processes (Burns and Stalker, 1961), it is unsurprising when organizations sense a business opportunity but fail to invest in it. Leadership thus plays an obvious role in making quality decisions, communicating goals and mobilizing resources to capture value from future business opportunities. For that reason, in our research we focused on how dual structures can serve as a vehicle to explore, acquire and scale agile practices in a separate organization and what top management's role is in connecting and aligning that organization with the hierarchy.
Third and last, transforming, as a key to sustainable growth, involves continuous renewal by way of "asset alignment, co-alignment, re-alignment and redeployment" (Teece, 2007, p. 1336). To be continuous, transformation requires organizations to adapt routines, restructure departments and/or organizational structures, and be able to recombine and reconfigure tangible as well as intangible assets as markets and technology change (Teece, 2007).
Moreover, to overcome radical challenges revealed by sensing and/or seizing activities, organizations can draw on various transformative orchestrated processes (Teece, 2014), which enable them to rearrange and reallocate their resources in accordance with a new strategy and/or develop new resources to supplement current gaps in their resource bases (Ambrosini and Bowman, 2009;Teece, 2014). Studies have emphasized top management's crucial role in accelerating organizational change and subsequently increasing a firm's ability to sense and seize new opportunities (Helfat and Peteraf, 2009;Teece et al., 2016). In such research, Harraf et al. (2015) have developed a framework with essential pillars for accelerating the transformation toward organizational agility. One pillar is a common culture, shared values and the importance of a shared organizational vision; the second, linked to benefits of empowerment, is the connection between employees and management; and the third is organizational learning processes that can contribute to a firm's dynamic capabilities and subsequently accelerate its agile transformation (Harraf et al., 2015). In the same vein, Pisano and Teece (2007) have argued that empowering management practices and entrepreneurial initiatives enables a firm to learn. Meanwhile, the ability to cope with an increasing degree of uncertainty-an ability with significant implications for the entire organizational system-requires strategic thinking, an innovative mindset, the exploitation of change and an unrelenting need to be adaptable and proactive (Harraf et al., 2015). Encouraging bottom-up entrepreneurial initiatives and interaction among all stakeholders both within and beyond a firm's boundaries can ultimately contribute to the organization's transformation (Teece, 2007).
Therefore, the challenge is transforming incumbent firms that are established in stable environments and have grown complacent as a result of long-standing market dominance into adaptive, flexible organizations (Kotter, 2014). Although theoretically compelling, research on the link between dynamic capabilities and organizational agility remains in its infancy. For that reason, in our study we focused on organizational agility and how a successful digital transformation, understood as a continuous process, should be managed to resolve tensions that arise when employees trained in agility confront rigid routines, cultures and structures of their hierarchical organization.
In this article, standing at the intersection of organizational agility and dynamic capabilities, we offer a framework that explores a mechanism for scaling agility. In particular, because "the pursuit of agility requires sensing, seizing, and transforming" (Teece et al., 2016, p. 26), the framework offers researchers a useful approach to understanding how agility contributes to exploiting opportunities and promoting change, as well as offers managers practical guidance in creating, scaling and managing agility. Accordingly, we detail here how firms develop and successfully integrate agility-enhancing capabilities to sense, seize and transform in times of rapid change and uncertainty.
The framework's contribution is an important one, for many companies struggle in their attempts to scale agility in their organizations (Rigby et al., 2018). Despite abundant research on agility in small teams, the ways in which companies can develop the dynamic capabilities needed by agile organizations at a relatively large scale remain poorly understood. Although some literature links dynamic capabilities with organizational agility at the conceptual level, the process of developing and scaling agility as such a capability has yet to be examined. That gap in the literature was the focus of our study.
3. Research design and data collection
Our research followed an abductive process by balancing existing theoretical conceptualizations with empirical evidence (Dubois and Gadde, 2002;Ince and Hahn, 2020). We considered the abductive process suitable for our study because it applies theory to explain a phenomenon without proceeding either purely inductively or purely deductively (Spens and Kovács, 2006). In that sense, our research was aimed at "generating novel theoretical insights that reframe empirical findings in contrast to existing theories" (Timmermans and Tavory, 2012, p. 174).
Our research consisted of a single-case study conducted to understand how an incumbent firm implements an agile transformation at scale (Yin, 2017). Allowing a profound understanding of real-world phenomena that are too complex for surveys, case studies are especially suitable for comprehensively exploring an event to test a proposed theoretical or conceptual setting (Cha et al., 2015;Ridder et al., 2014;Yin, 2017). In this section, we describe the primary steps of our study, the research context and our methods of collecting and analyzing data.
The company under study is a multinational corporation offering financial services that operates in major markets in the insurance and asset management industry. With offices in approximately 70 countries and more than 150,000 employees worldwide, the company is characterized by distinct business divisions that are hierarchical and managed across several tiers. Although the corporation ranks among the top players in the global market for insurance and asset management, the insurance sector in general is exposed to rapidly emerging digital technology, changing behavior among customers and increased industry competition, along with disruptive threats due to the emergence of innovative fintech firms (Yan et al., 2018). In its shareholder presentation in 2016, the company reported that many of its divisions had already experienced more than 50% business growth in digital markets and a nearly 90% increase in reach supported by social media, as well as expected revenues to grow by more than 30% within the next four years. Thus, the company's milestones were clearly set. However, reaping the benefits of digital business in 2018 and beyond has required a foundation for accelerating and embedding a digital culture. Consequently, management initiated a large-scale agile transformation in 2016 to keep pace with digital opportunities in the volatile digital business environment. Indeed, as a case, the company was chosen due to its experience with two major events: the launch of a global, corporate-wide agile transformation agenda publicly announced at a shareholder conference and the ensuing opening of ACCs. For the latter event, our research team was granted internal access to the company's data, facilities and meetings.
To explore the phenomenon of scaling agility within organizations, we gathered data from various sources to ensure construct validity and data triangulation (Yin, 2014). We developed a semistructured interview guide, tested it and conducted 36 interviews at three time points from 2018 to 2021 in order to gain insights into the context of scaling agility and the agile transformation. In line with previous research, we chose semistructured interviews to enable open and follow-up questions about the topic (Venkatesh et al., 2021;Cooper and Schindler, 2008).
Interviewees were purposively selected based on their experience with and perception of the phenomenon (Cooper and Schindler, 2003), and our sample included personnel with a broad spread of roles and qualifications. Several were internal employees with hands-on knowledge about practicing agile methods, including developers, product owners, stakeholders of scrum teams and agile masters as well as coaches with the expertise to enable others to scale agility. We also interviewed managers and employees who were currently confronted with the agile transformation agenda. Table A2 details the demographics of the employees interviewed in our study; for confidentiality's sake, they are referred to as ID1, ID2 and so on, up to ID36.
The interviews were conducted over the phone or in person, lasted 45-60 min on average and were recorded and transcribed. In all interviews, we followed suggestions for empirical social research and study design (Eisenhardt, 1989;Yin, 2003). As a result, our interview guide had five primary sections. First, we addressed the implementation of agile practices to gain an understanding of agility and appropriate areas for its adoption in the organization. Second, we analyzed the setup of the company's ACC (e.g. the extension of agility into other areas of the organization or challenges that arose with agile practices). Third, the interviews focused on the vision and perception of agile organizations and, fourth, the design of the agile transformation in terms of roadmap planning, the role of management and the means used to disseminate agile practices. We also asked interviewees about obstacles, barriers and solutions that arose in the process of disseminating agility. The fifth and final section addressed requirements and necessary changes for the ongoing transformation to succeed. Altogether, we were able to probe different aspects of agility, scaling and organizational change during the interviews. Because we conducted interviews at three time points in the agile transformation, we slightly adapted the interview guide to follow a specific focus along the agile transformation process. Table A3 details the themes of the interview guide and when its various parts were used during our research.
According to Tellis (1997, p. 3), "Consideration must be given to construct validity, internal validity, external validity, and reliability." To ensure reliability, we therefore developed a formal case study protocol. Meanwhile, to increase validity, secondary data were consulted along with the primary data as a means to refine our findings (Gerbl et al., 2015;Yin, 2017). Sources of secondary data included the company's internal and public presentations, annual reports and relevant handbooks, along with data archived on the company's website regarding its agile transformation and key strategic objectives, onboarding information for employees, training materials, press releases and industry journals. We triangulated all of the data from the primary and secondary sources following the steps outlined by Tellis (1997). By utilizing multiple data sources alongside a research protocol for a single-case study, we increased the overall construct validity and consistency (Yin, 1994). Moreover, we reduced the risk of researcher bias by taking data from various sources and thereby ensured the rigor and richness of our findings (Eisenhardt, 1989;Yin, 2003).
Data analysis
In data analysis, we manually coded the data in an iterative process consisting of testing, comparing and retesting each other's codes to reduce intercoder discrepancies. All coders were experienced in the field as well as in applied content analysis (Duriau et al., 2007). During coding, the coding guidelines were constantly refined, based on mutual exchange and aligned with interpretations of the codes following a two-step process (Venkatesh et al., 2021). First, we created first-order codes based on interviews with agile experts. For instance, a recurring theme was "Customer value orientation" following market intelligence from new offerings from competitors. That theme was merged into the first-order code "Customer and market signals." Data management software was used to manually analyze interview data. Second, we derived literature-based codes as overarching themes based on theoretical concepts. For that purpose, we searched and reviewed literature on agility, digitalization and dynamic capabilities. In analyzing the literature and investigating theories, methods and results, we iteratively moved back and forth between first-order codes and overarching themes to derive second-order code categories (Murphy et al., 2017). The coding scheme is illustrated in Figure A1. With reference to our coding framework, we paraphrased individual statements, generalized them and derived our results (Mayring, 2015). Figure 1 illustrates the study's abductive research process.
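The paper reports only a qualitative, iterative reconciliation of codes and does not name a formal agreement statistic. Cohen's kappa is one common way to quantify the intercoder discrepancies described above; the following is a minimal illustrative sketch, not the authors' actual procedure, and the code labels and segment counts are invented for the example.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who each assigned
    one code to the same sequence of interview segments."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: share of segments coded identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if the coders' code frequencies were independent.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both coders always used a single code
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical first-order codes assigned by two coders to four segments.
coder_1 = ["customer_market_signals", "leadership", "customer_market_signals", "scaling"]
coder_2 = ["customer_market_signals", "leadership", "scaling", "scaling"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # prints kappa = 0.64
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; in practice, disagreeing segments would be discussed and the coding guidelines refined, as the authors describe, until agreement stabilizes.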
4. Findings
In this section, we present the results of our data analysis by highlighting exemplary quotations that emerged from the interviews. First, we illustrate the firm's identification of opportunities and threats as a major trigger for its transformation and highlight its strategic response. Second, we explore how the ACC's setup contributed to cultivating agility and related competences, and, third, we focus on the dissemination of agility across the organization.
4.1 Sensing: exploring opportunities and threats
Our case analysis revealed that the firm, a multinational corporation, is exposed to uncertainty due to the proliferation of digital technology and disruptive threats at the hands of major players such as Google, Apple and Amazon. Such exposure was underscored by the corporation's shareholder presentation in 2016, which stressed the rapid emergence and increase of innovation in its business environment in relation to its own business model. The mentioned companies, however, enjoy the advantages of being data-driven and well-versed in end-to-end IT-related operations, and the same can apply even to nascent startups in the financial and insurance service landscape. Our interviewees thus emphasized the urgency of taking action in order to remain viable:

So, if you have competitors such as Google or Amazon, then you have to be able to move faster, and you cannot do that with a traditional approach. (ID19).

I see a threat from fintech firms trying to dig into our business model or in the fact that other insurers have long since jumped on the agile bandwagon and designed their products much faster and more flexibly. (ID22).
That means that if we, as an enterprise, cannot change quickly, we will die out sooner or later. (ID16).
The leaps across industry borders by new competitors could also be connected to needs among new customers. Along those lines, one software developer stated:

Clearly, the insurance industry is no longer separate from everyone else. If Netflix's customers can cancel their subscriptions quickly, then why do I have to write a letter to an insurance company? Even if I am lucky enough to meet a cancelation deadline, or have to do that three months in advance, then I have to wait another six months until I get an answer. (ID19).
Top management at the firm was aware of the significant risk posed by competitors and, in response, formulated a strategic shift supported by external consultancies. The newly appointed CEO, for instance, introduced a so-called agile transformation agenda, which, according to data from the company, comprised five key strategic objectives: (1) a client orientation, (2) pervasive digitalization, (3) technical quality, (4) new areas for growth and (5) an integrative culture of performance. Pressured as a key market player, the firm developed the agenda to strategically focus on better understanding customer segments, seizing new cross-selling opportunities and achieving higher retention rates. Above all, creating superior customer value was the top strategic objective for all subsequent actions, as mentioned in an interview with an IT consultant:

The goal is to be the market leader and to be the biggest. That means we have to get to markets faster with swifter product cycles and perhaps also simpler products that customers can choose on the Internet without having any insurance expertise. (ID22).

However, in view of the implementation of that priority, interviewees noted that traditional structures and management tools were no longer appropriate. As a department lead stated:

As long as we are hierarchically equipped, we are not customer-centric. As long as we report upward in the hierarchy, we are not committed to the customer. That is why the process of becoming agile is sensible, because only then can you really understand the customers' needs and really align all of your actions accordingly and deliver faster. (ID14).
Within the organization, the understanding of an agile approach covered several aspects, including leadership, transparency in work and a focus on giving customers easy-to-use digital services. In the domains within the firm found to suit such an approach, front-end applications with interfaces were prominently advocated for working on agile teams. Soon enough, that perception became distilled into the notion that value for customers could be increased only by creating end-to-end responsibility in applications. As one software developer emphasized:

The greatest benefit that agility can yield is creating end-to-end responsibility... across departments, maybe across companies,... to work better with each other and not against each other. (ID21).
Another prevailing drive was to level up against competitors along the entire value chain. However, despite recognizing market pressure and the virtues of following an agile approach, members of the organization reported experiencing difficulties and uncertainties with the agile transformation agenda. In fact, numerous internal debates about how to change the multinational corporation had arisen following its failed bottom-up initiative a few years prior. According to the company's data, a first agile transformation approach failed to garner support from a sufficient number of employees due to management's lack of commitment in the form of attributed importance, understanding and driving force.
Regarding that initial attempt, one interviewee indicated that management had not recognized the concerns of personnel who require clear direction throughout the transformation. As a department lead added, "We really need a fundamental decision from the very top. At least the CIO should say-even better the CEO-'Well, that is where we want to go'" (ID14). Ultimately, the matter was addressed at a turning point in the transformation process, when the CEO publicly claimed to be a front-and-center part of that process, not a victim. The CEO's claim heralded a further push for digitalization by shaping and organizing the corporation, via an agile transformation, into a fully customer-centric, end-to-end digital company. Based on an agile approach, the goal involved enhancing flexibility in reacting to market changes and adapting the product portfolio accordingly. The agile transformation received a budget of up to €700 million, partly to establish ACCs, which were expected to promote the digitalization of the company's product portfolio. It also promised to address both the back-end software calculation of insurance rates and the front-end user interface, particularly with new channels such as insurance apps on smartphones. Figure 2 illustrates the incumbent firm's agile transformation agenda, including the transformation's detailed strategic objectives announced publicly at a shareholder meeting in 2016 and the founding principles of the first ACC launched in 2017.
Under the guiding principles of driving digital initiatives, becoming centers of digital knowledge and fostering digital culture, ACCs embody the agile transformation. The mentioned guiding principles were again substantiated by the six pillars that formed the foundation of the teams' understanding of agility. From a business side, the product teams in the ACCs would address both the back-end software's calculation of insurance rates and front-end user interfaces, with new channels such as insurance apps on smartphones.

Figure 2. Agile transformation agenda

Thus, business divisions were able to allocate complete units from an end-to-end perspective in the new setting.
Additional prominent examples of sensing, including activities to identify technological opportunities, analyze markets, listen to customers and formulate hypotheses on how the future might look, appear in Table A4.

4.2 Seizing: scaling agility with an agile center of competence (ACC)

The ACC that we studied was implemented in 2017 in a newly rented site outside the company's existing campus and initially hosted only a couple of teams. At the time, because the company started the location from scratch, the key challenges primarily revolved around developing a functioning work environment for the teams-among other things, getting desks, working IT infrastructure and Internet access. After a few months, however, those pain points had been resolved:

I think that hardware and infrastructure formed the basis of a lot of our problems at the beginning, when we made a lot of progress. It was not clear what kind of workstation equipment was available or what kind of technical infrastructure the teams used, meaning stuff like the continuous integration platform, cloud environment, and that kind of thing, not to mention how the rooms were equipped. (ID03).
The ACC constituted the operative kickoff for the agile transformation and consolidated training for employees about how to work in an agile setting. Inside the ACC, teams have developed digital insurance products and services while observing agile values, methods and team constructs. Beyond that, the ACC has provided employees the opportunity to experience and learn agile work methods in a completely new environment. The distinct combination of agile methods, an isolated environment and team-based constructs has proven to be ideal for an inherently agile setting. However, our case study also revealed that the agile work environment did not automatically reflect a connection between strategic objectives and operational perspectives, as one agile master explained:

It is still not clear to me what problem we are trying to solve by making the entire organization agile. That is why it seems to me that we are now going agile because it is cool and because everyone's doing it. Sometimes, it was a bit more like we wanted to become more efficient. So, the question is whether you need agility to do that. (ID18).
Other critical voices pointed out that best practices borrowed from born-agile companies would eventually collide with traditional hierarchy-based structures, culture and leadership models. In response, management has sought to acknowledge entrenched legacy systems and introduced a dual-speed architecture. On the one hand, because the existing operations model with established hierarchies, workflows and functions cannot be dismantled, it is instead maintained for the ongoing exploitation of business. On the other, in a separate organizational setting, agile approaches can be explored, acquired and scaled while being segregated from pre-existing structures. Overall, the ACC combines a pool of newly introduced work approaches and the space to experiment while maintaining distance from traditional organizational proceedings. Indeed, that purpose guided the foundation of the ACC, which was established at an external location at a certain distance from headquarters and built on six pillars of agility. Table 1 details the six pillars structuring the ACC, all of which originate from a user manual within our secondary data intended as an onboarding handbook for employees joining the ACC.
Above all, the organization's way of discussing, finding consensus and dealing with fundamentally new roles and processes yielded new forms of collaboration. With the emergence of new roles and positions unlike the functions within the preexisting organizational model, extensive expertise needed to be accessed from external consultants.
The influx of external facilitators combined with a steep learning curve in practices enabled an initial functional version of the ACC. Little by little, teams received necessary support with testing new work methods and started to successfully devise products in lean, agile processes. In a rather brief period, the popularity of the ACC increased rapidly, and the number of teams working in the facility has more than doubled over time. During our first round of interviews, an IT consultant noted, "We currently have only 22 agile teams, which accounts for approximately 5-8% of our overall portfolio" (ID23). Two years later, however, internal company data evidenced that the number had increased to as many as 50 teams. Given those results, the corporation founded a second ACC at another location, and three years since its opening, approximately 400 people were working in a special digital division with 50 agile teams.
Despite the rather small number of personnel working in the ACC compared with the entire workforce, many interfaces in the digital value chain have been affected by the ACC's teams. One major impediment in particular revealed dysfunctional collaboration with nonagile departments in the organization but outside the ACC-to be specific, traditional organizational units primarily dealing with the back-end parts of applications, legal and regulatory departments and operations with low complexity in planning, customer interaction and market volatility. The ACC's focus, by contrast, was front-end applications, particularly for online distribution and the virtual settlement of insurance processes, which automatically prompted different operating speeds in communication, the creation of product increments and planning horizons. Therefore, during the initial interviews, interviewees highlighted severe tension between the ACC and major parts of the organization's legacy structures. Coordination efforts in response were underscored by a product owner:

Integrating the product into the overall product landscape is very challenging and still one of the biggest barriers, because it is outside the team setting and therefore dominated by classic project management. (ID17).

Table 1 The six pillars structuring the ACC

Customer value: (1) The aim is to solve customers' specific problems by developing intuitive, easy-to-use products and/or services. (2) Customers are involved in the product creation process to understand their needs and in testing initial ideas. (3) Customers are asked directly for feedback, which is then incorporated into further iterations of development.

Lean startup approach: (1) Minimum viable products (MVPs) are developed in a lean organizational structure. (2) Prototypes are created within approximately 100 days in order to demonstrate early functionality and provide testable software. (3) If customers approve, then the MVP is further developed.

Iterative funding: (1) Iterative rounds of funding begin after 100 days. (2) Agile teams are formed in the ACC, with a product owner who takes the overall responsibility for the project. (3) Based on the initial results, the implementations are further financed, modified or scrapped at regular intervals (i.e. after customer and technical tests). (4) Financing is oriented toward funding rounds for startups.

New work methods: (1) The work is done across departments, on cross-functional teams with a broad field of expertise (e.g. operating organization, IT and marketing), or in specialist units or divisions (e.g. sales). (2) Co-location ensures lean coordination processes, short feedback loops and high quality. (3) The teams work together on their topic in one room without interference (e.g. teams focus 80% of their time on their project) and use collaborative tools (e.g. chats) to communicate with each other.

Agile practices: (1) The ACC enables experienced agile coaches to teach knowledge about agile practices. (2) The work is performed using agile methods. (3) The development is based on methods such as pair programming and scrum.

Digital infrastructure: (1) The infrastructure enables software to be scaled and adjusted. (2) Requirements for information security and data protection are met throughout. (3) Various forms of technology enable test-driven development.
Even after a year, interviewees revealed that cross-linkages to departments outside the agile setting of the ACC continued to face serious obstacles:

Where challenges are most likely to be observed is in contact with the outside world-units that have not gone agile. In the past, we were used to knowing more or less precisely what was going to happen and when in the next two years. Now, we again want to have that pseudo-certainty in planning while suppressing the fact that those plans usually become quickly outdated, change, or have to be discarded. (ID14).
Meanwhile, the branching out of teams from the ACC and its integration into the larger organizational setting turned out to be a concern. Table A4 shows exemplary quotations about agility-oriented seizing and scaling capabilities in the ACC.
4.3 Transforming: disseminating agility across the organization
Having implemented the ACC in a separate setting, the incumbent firm's challenge continued to be disseminating agile practices at the organizational level, as highlighted by one experienced agile master:

How would I scale agility? I would stick to the ACC concept for the time being. So, I would say every team, no matter where it comes from... has to go through the ACC. And, for me, that means two tasks: the managers have to provide a framework. They have to provide the system that allows agile working. And the second step is that people have to understand what is required of them. They also have to be trained. (ID23).
For the agile transformation to succeed, developing an appropriate scaling framework and embedding it in the organizational structure proved to be pivotal. Because all frameworks for scaling agile methods vary in complexity, there is no one-size-fits-all framework, and different requirements have to be considered. The concrete task for top management was thus to expand agility from the project to the organizational level. Our findings clarify, however, that agility can be achieved only with continuous transformation-oriented effort and is linked to adapting routines, effecting cultural change, modifying structures and being able to reconfigure assets. Therefore, to disseminate agility, the firm trained personnel from the hierarchy-based structure in the facility and thereby imparted capabilities essential to working on an agile team. Despite the relevance of that approach-after all, achievements in agility heavily depend on employees' knowledge, capabilities and access to informationconcerns remained about integrating agile projects at an organizational level in ways that would yield business value without interfering with legacy structures. Simply returning newly formed agile teams that had compiled new product features into a rigid, non-agile environment characterized by traditional thinking and working would have proven dysfunctional. For that reason, a new design needed to be established to assimilate the organization's agile project-driven endeavors, and, in doing so, equal standards had to be imposed to create the same prerequisites for work across organizational divisions. Thus, following the standards built on the six pillars of the ACC's foundation, digital equipment, office space design and team constellations became increasingly similar across different sites and organizational units.
From the perspective of employees, different stages of the transformation have been associated with different difficulties. For instance, although top management provided guidelines for employees that included essential information concerning onboarding within the ACC, it could not clarify to everyone how the change ought to proceed in the larger organizational context. Our findings indicate that the direction and communication strategy for agility at scale has indeed remained vague and prompted diverse challenges. Communication events with top management present have continued to be very limited and failed to foster open exchange about the future direction of the agile transformation. Overall, such trends reveal that the transformation is ultimately not static but dynamic and requires continual change and renewal in an ongoing process reified by a change agent: We are not going to say, "Well, we are done with the transformation now. We are an agile company now." Instead, the objective would be achieving a state where we can admit that we are never finished. Every day, we face a demand for something new, but now we have the ability to change. (ID16).
The agile transition from static models to truly adaptive organizations is a multilayer endeavor in which agile capabilities are employed and continuously developed. Figure 3 illustrates the incumbent firm's ongoing agile transformation and highlights the dissemination of the three pillars of agility achieved by using an ACC, which can strategically serve as an accelerator of organizational agility.
As shown in Figure 3, employees from the incumbent firm's various divisions were transferred to the ACC to learn about agile practices and receive intensive hands-on experience. Following the six mentioned pillars, they work as product owners, developers and agile masters on certain digital products following agile practices. After a given period or a major event (e.g. the launch of an application), employees are relocated outside the ACC with the aim of disseminating agile practices and the accompanying mindset across the organization. Since our first inquiry, hundreds of employees have undergone that process as part of the firm's agile transformation to scale organizational agility. Exemplary quotations on transforming and disseminating agility across the organization appear in Table A4.
Discussion
This article illustrates how an incumbent firm has transformed while scaling agility broadly across the organization as a means to cope with challenges presented by disruptive digital technology, tumultuous markets and rapidly changing business environments. Our case study revealed a sizeable gap between the popularity of the concept of agility and understandings of the underlying principles of agile scaling and agility-enhancing capabilities. As a result of our detailed application of Teece et al.'s (2016) dynamic capabilities framework, we revealed the agility-enhancing capabilities of sensing, seizing and transforming in the organization in times of digital transformation and change. Table 2 shows how such an incumbent firm can employ the framework as a means to identify pathways of agile practices to implement agility throughout the organization. In what follows, we abstract and organize the findings according to the framework and offer propositions about how companies can achieve organizational agility by scaling up agility from a divisional toward an organizational level.
Exploring the fit between market signals and internal capabilities
Our findings indicate that the incumbent firm has to build sensing capabilities in order to identify business opportunities (e.g. digital market channels, cross-selling opportunities and ways of ensuring higher retention) as well as defend market shares against rivals.
Our case study revealed that the incumbent firm's top management, over time, was highly cognizant of and sensitized to handling the latest industry trends and market disruptions. On that count, other scholars have highlighted top management's role in implementing agile strategies and employing them throughout their organizations (Holbeche, 2019a, b; Meredith and Francis, 2000). In our case, the incumbent firm enhanced its sensing capabilities by integrating internal and external sources, including customers across markets, vendors, external consultancies and industry experts. That observation aligns with the findings of Teece et al. (2016), who have highlighted the importance of searching locally and broadly across technical and market domains to gather the relevant information for internal teams. Especially when digital transformation is a disruptive threat, an open mindset of senior executives is essential (Stadler et al., 2021). As Andy Grove, former CEO of Intel, famously said, "When spring comes, snow melts first at the periphery, because that is where it is most exposed" (Day and Schoemaker, 2004, p. 128). To sense early warning signs of change, executives have to develop an awareness of disruptive digital competitors, changing consumer behaviors and disruptive digital technology, as well as understand how all of those aspects affect their own business model. In other words, they need to develop digital sensing capabilities useful in scouting trends, understanding scenarios and establishing a long-term digital vision (Warner and Wäger, 2019). To encapsulate those needs, we thus offer our first proposition:
Proposition 1. Top management needs to develop sensing capabilities using formal and informal sources inside and outside the organization to clearly view digital opportunities and threats, to understand how they affect the business model, and to set a clear direction for digital transformation.
Table 2. Incumbent firm's sensing, seizing, and transforming activities (transforming): (2) the ACC serves as the key accelerator for producing agile teams that are reintegrated into the organizational setup; (3) organizational network structures have to be established to achieve the same standard; (4) transforming requires rethinking beyond structures, organizational functions and management practices such as budgeting, incentives and measurement systems, and resolving biases from the previous standard of work; activation of internal and external change agents; repeated entrenchment of best practices and key learning in organizational structures; holistic transformation.
Mobilizing resources to scale agility
Over the course of our analysis, we discovered that the incumbent firm's initial attempt at an agile transformation was hamstrung by its neglect to address sensing and seizing actions at the managerial level. Such neglect negatively affected its ability to reconfigure resources to meet changing market conditions at the time. In essence, incumbent firms such as the one that we investigated continue to face severe challenges imposed by their organizational structure, culture and leadership during their quests to combine internal stability with external agility. Mobilization capabilities, however, require management's ability to defy traditional decision-making rules and processes of resource allocation (Teece, 2007). On that point, our findings complement Tushman and O'Reilly's (2002) concept of having exploitative and explorative capabilities concurrently or else facing severe challenges in allocating adequate resources toward a sensed opportunity. Our case shows how the incumbent firm rallied a second, more comprehensive attempt at achieving an agile transformation when emerging digital possibilities spurred management to drastically and rapidly shape and organize a new transformation. In the second attempt, the ACC served as the intermediate link that accelerated the transition of people between traditionally managed and agile projects. On that note, Kotter (2012) has described two complementary systems: one driven by hierarchy, the other characterized by systems of networks. Transitioning between the two systems remains voluntary for all employees, albeit without any further specifications about how such a transformation can be achieved. In our case, the ACC extended Kotter's (2012) construct by linking the two systems, thereby creating a bridge for the transformation to an agile organization. The transformation began in a small setting, evolved and gained traction, as shown by the increased number of teams. Thus, our study has shown that incumbent firms have to rethink traditional structures, functions and management practices as well as resolve biases in their previous ways of working. For speed and agility, incumbents need fundamentally different ways of sensing information, seizing opportunities and implementing them. As Kotter (2014, p. 12) also has observed, "All successful organizations operate with a dual system more or less during the most dynamic growth period in their lifecycle." Thus, we also propose:
Proposition 2. The successful scaling of agility requires dual structures, in which agile practices are explored, acquired and scaled in a separate organization, launched and sponsored by top management, and, with top management's support, connected and aligned with the hierarchy.
Managing an agile transformation
To address uncertainty in today's volatile digital business environments, organizations seek to develop transformative capabilities, which focus on effecting continuous organizational change (Helfat and Peteraf, 2009). After all, integrating, building and reconfiguring internal and external competences are vital activities for developing organizational agility, and viewing organizational agility within the dynamic capabilities framework can elucidate such multilayer endeavors, including major challenges and setbacks confronted during the transformation to agility. Transformative capabilities are important because they support organizations in reconfiguring existing resources for new digital strategies as well as in building or accessing new resources to supplement current gaps in their resource bases. As explained by Teece (2007), a successful continuous transformation requires routines to be adapted, departments or organizational structures to be restructured and assets to be recombined and reconfigured. Although training people from the hierarchy in the ACC to develop essential skills and capabilities in agile principles worked well, transferring them back to the more rigid, non-agile environment of the hierarchy to disseminate agile practices turned out to be challenging. To ensure that agile practices are adopted at scale, all routines, all structures and the culture need to be adapted. That process is a continuous one, and employees expected to disseminate agile practices need the backing of top management, along with clear directions and communication on how the change is being implemented. Therefore, we additionally propose:
Proposition 3. Successful transformation needs to be understood as a continuous process that requires (a) top management's extensive efforts in addressing the challenges that employees trained in agility face when confronted with rigid routines, cultures and structures of the hierarchy-based organization; (b) continuous direction and support for their integration back in the hierarchy; and (c) a clear communication strategy.
Conclusion
Organizational agility is not a one-size-fits-all solution precisely because requirements for agility are sensitive to the organizational context (Teece et al., 2016). Therefore, a general framework is needed that gives managers guidance in their decision-making about agile activities and how to scale them. In our research, applying the dynamic capabilities framework to study and understand agile transformation proved to be very useful (Teece et al., 2016) and revealed that a successful agile transformation requires a specific set of activities in sensing, seizing and transforming. As described in the literature on dynamic capabilities, sensing is a necessary but insufficient condition for a successful agile transformation (Schoemaker et al., 2018). Sensing activities have to be complemented with "new systems that take advantage of external changes" (Schoemaker et al., 2018, p. 21), which in our case study was an ACC. For an agile transformation to be sustainable, routines, structures and assets need to be adapted (Teece, 2007). As revealed by our case study, however, organizational agility as a dynamic capability is difficult to develop and scale in organizations. In response, the dynamic capabilities framework offers a useful guideline that managers can follow to decide which activities are needed for sensing, seizing and transforming in order to achieve agility at scale.
Our findings contribute to the literature on organizational agility by showing how an organization can disseminate agility in a large-scale setting. We provide a theoretically grounded set of actions oriented toward organizational transformation drawn from the dynamic capabilities framework, actions that focus on the organization's ability to sense market opportunities and threats and to exploit them by reconfiguring resources, processes and structures, all to adapt to a changing environment. Following Teece et al.'s (2016) dynamic capabilities framework of sensing, seizing and transforming, we have identified assorted transformative actions, an endeavor that extends the literature on agility as a dynamic capability (Doz et al., 2008; Lee et al., 2015; Walter, 2021) and highlights the role of top management for successfully scaling it. Our work also complements research investigating the role of dynamic capabilities for digital transformation (e.g. Warner and Wäger, 2019) by adding the role of agility for sensing, seizing and digitally transforming. On top of that, we also contribute to the idea of dual structures (Kotter, 2012, 2014) by showing how the concept can be used to address the challenges of scaling agile in a hierarchy. Our study has yielded several practical insights. First, our findings indicate that corporations need to develop capabilities in sensing, seizing and transforming in order to be more resilient and flexible when external events require rapid adaptation. For instance, to put sensing capabilities into practice, managers should regularly cooperate and maintain proximity with external stakeholders by conducting workshops to anticipate future threats and opportunities. Second, the allocation of resources (i.e. seizing) is related to top management's strong commitment to capturing value from emerging opportunities and implementing a clear communication strategy to do so. Third, incumbent firms should incorporate network structures in parallel to their hierarchical ones, ideally a separate agile entity characterized by flat, decentralized structures with novel roles entrenched in the established system. We believe that such an approach can accelerate the transformation to organizational agility. To aid such endeavors, Figure 4 provides practical guidance for organizations and executives to achieve better organizational agility. Therein, based on our discussion, we developed a sequence of questions that incumbent firms could walk through to develop and successfully integrate agility-enhancing capabilities to sense, seize and transform in times of digital transformation. Our findings come with some limitations that offer opportunities for future research. First, we extensively studied the transformation of an incumbent firm toward agility; however, lingering criticism of the case-study methodology's dependence on single cases precludes any generalizing conclusions (Tellis, 1997). Second, while the time frame of four years allowed an in-depth understanding of the transformation process, it did not allow observing the full transformation. Third, another limitation may be potential bias due to our study's focus on a single industry. In that light, we encourage scholars to conduct empirical studies at a larger scale in different settings to examine congruency among results.
"The capacity of an organization to efficiently and effectively redeploy and redirect its resources to value-creating (and capturing) higher-yield activities as internal and external circumstances warrant" Teece et al. (2016, p. 17) Organizational agility "A core competency, competitive advantage, and differentiator that requires strategic thinking, an innovative mindset, exploitation of change and an unrelenting need to be adaptable and proactive" Harraf et al. (2015, p. 675) Strategic agility "A meta-capability that comprises the allocation of sufficient resources to the development and deployment of all specific capabilities, and further refers to the ability to stay agile through balancing those capabilities dynamically over time" Shams et al. (2020, p. 2) Strategic agility "The ability to remain flexible in facing new developments, to continuously adjust the company's strategic direction, and to develop innovative ways to create value" Weber and Tarba (2014, p. 5) Strategic agility "The ability to continuously adjust and adapt strategic direction in core business, as a function of strategic ambitions and changing circumstances, and create not just new product and services, but also new business models and innovative ways to create value for a company" Vecchiato (2015, p. 29) Enterprise agility "The ability to adjust and respond to change" Sherehiy et al. (2007, p. 445) Corporate agility "The capacity to react quickly to rapidly changing circumstances" Brown and Agnew (1982, p. 29) Table A2.
Section 5: The agile transformation process (inquiries, 2019 and 2021)
24) What were the key challenges during the transformation?
Agile measurement and performance
25) In your opinion, what is the most important indicator of agility in your organization today?
26) How do you measure the success of agility? Does competitive agility help to better achieve project, unit, and/or organizational goals? Has anything changed in the definition of those goals?
27) In your experience, where has the adoption of agile practices produced little output? Where were expectations not met?
Agile management and employees
28) In addition to the introduction of agile practices such as scrum, what changes have occurred in the firm's organizational structure?
29) What impact do agile practices have on teamwork? What changes have you observed in that regard?
30) What is the new understanding of leadership at the firm? What role do leaders (e.g. management) play in the agile transformation?
31) How does the corporation's understanding of agility affect manager-employee relationships?
32) How is a culture of trust fostered at the firm?
Change management
33) What would you change about the corporation's approach to the change process?
34) What have you learned from the experiences of teams that urgently need to change at the organizational level?
35) Imagine that it is five years into the future. What is viewed as having been the most important building block for making the corporation a success?
Closing
36) Is there anything that you would like to add or comment on from your side that we haven't talked about?
Table A3.
Table A4. Phase / Quote / Interview ID
Sensing: exploring opportunities and threats
"For a while, we had discussions about whether you can only apply agile practices in the front end or also back-end applications such as inventory systems, and so on. I'm more of the opinion that you have to apply agile practices in the entire specs and shouldn't just do half of it, because we also need the interplay between front-end and back-end in the interaction of the products." (ID12)
"Up to now, companies have only lost their way since individual decisions were made somewhere at the very top, and then were rigorously implemented top bottom, completely bypassing customer benefits and needs. Boom. And then no one needed Nokia anymore, hmm (laughs)." (ID14)
"Why is this necessary? (. . .) What I do notice is that a lot of colleagues, especially at a higher strategic level, are talking about the fact that the market is changing at high speed, which is something that we, as end users, are very much aware of. This means that there are new products developed considerably faster. Many more suppliers are entering the market quicker, simply because of digitization. This means that no matter how large or small my company is, and this organization is no exception as the market leader, I must try to adapt to new conditions very quickly." (ID16)
"So the greatest benefit that agile practices can bring is just to create the end-to-end responsibility and just make that possible, that across departments, maybe across companies, if it's about technology, to work better with each other and not against each other." (ID21)
"Our management then went to America to Silicon Valley, and they came back talking about test-driven development and agility." (ID36)
Seizing: scaling agility with an agile center of competence (ACC)
"I think that is the biggest success factor of the ACC, because we started in small steps with product increments (. . .) and we have teams creating something valuable after half a year and with each further release." (ID16)
"A current barrier is creating physical space for the teams. Our current infrastructure is bursting at the seams. Everyone wants and needs space." (ID19)
"And you are not used to it in this organization, because everyone has their silo (. . .) you have to break up (the silo structure), to really become a team that interacts with each other. That was the biggest challenge we had to learn. It sounds totally banal, but it was truly hard." (ID23)
"So, that we've already got the basics right, and agile teams have learned a lot of methodology and mindset (. . .) and yet, agile scaling is still ahead of us. But I still lack the scaling framework with fixed architectures, with program views on several agile teams." (ID25)
"The people there (ACC) were freed from the daily work and could do new digital product developments (. . .)" (ID36)
Transforming: disseminating agility across the organization
"C-level management support and an Agile mindset seem to be the crucial factors (. . .)." (ID10)
"And since a few days ago [board member] wrote an article where finally, for the first time after twelve years, our highest boss declares what he understands by organizational agility. It coincides quite well for me with what is also in our textbook." (ID22)
"Now with this agile scaling initiative, meaning a clear expansion, we look at what already worked well in the ACC, what we can adapt, but also what we have to change. This is because in the ACC we have a single-team context, i.e. we have one team, one backlog, one Product Owner, and in the scaling initiative we will combine several teams into tribes. So for the first time we have a scaled setting and that is of course another level." (ID28)
"You need management commitment, that was also a clear learning from the first agile change approach. The management must understand there is a behavioral change, and it starts with themselves. And the second learning embraced that we only applied agile practices in front-end IT teams, which per se is nonsense, because agility says you need cross-functional teams. Now with the new agile transformation agenda we did it completely differently from the very beginning." (ID30)
"So of course, the biggest challenge is, when you introduce new ways of working or structures in a large organization you always have a lot of resistance. By now it manifests that there has already been a significant change, especially in the classic hierarchy (. . .) There is also a change in employees' mindset (. . .)." (ID31)
"There are three success factors that have made us strong. First, the setting of incremental, iterative learning development, ultimately applying the agile principles ourselves (in the ACC). Second, then there's the principle of one-some-many, i.e. saying I'll think about an implementation, a pilot, and if it works, I'll try it out in two or three other domains of the company. And if it works there, I proceed with disseminating it further. Building the whole thing up in an evolutionary way. And the third is to look first at processes and then at structures." (ID34)
Figure A1. Coding scheme | 2023-03-09T16:10:46.552Z | 2023-03-09T00:00:00.000 | {
"year": 2023,
"sha1": "1dd584304ba2afb86602d88473881d75b7996fd9",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/MD-05-2022-0650/full/pdf?title=scaling-organizational-agility-key-insights-from-an-incumbent-firms-agile-transformation",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1f4ff9ae2ad64a458fa6ea173ae3127caf0b348c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
126060322 | pes2o/s2orc | v3-fos-license | High Dose of FGF-2 Induced Growth Retardation via ERK1/2 De-phosphorylation in Bone Marrow-derived Mesenchymal Stem Cells
Fibroblast growth factor (FGF)-2 is one of the most effective growth factors for increasing the growth rate of mesenchymal stem cells (MSCs). Previously, we reported that a low dose of FGF-2 (1 ng/ml) induced proliferation of bone marrow-derived mesenchymal stem cells (BMSCs) through AKT and ERK activation, resulting in reduction of autophagy and senescence, but not at a high dose. In this study, we investigated the effects of a high dose of FGF-2 (10 ng/ml) on proliferation, autophagy and senescence of BMSCs in long-term cultures (i.e., 2 months). FGF-2 increased the growth rate of BMSCs in a dose-dependent manner over the short term (3 days), while during long-term culture (2 months), the population doubling time increased and the accumulated cell number was lower than in the control when BMSCs were cultured with 10 ng/ml of FGF-2. 10 ng/ml of FGF-2 induced immediate de-phosphorylation of ERK1/2, expression of LC3-II, and an increase of senescence-associated β-galactosidase (SA-β-Gal, a senescence marker) expression. In conclusion, we showed that 10 ng/ml of FGF-2 was inadequate for ex vivo expansion of BMSCs because it induced growth retardation via ERK1/2 de-phosphorylation and the induction of autophagy and senescence in BMSCs.
INTRODUCTION
Mesenchymal stem cells (MSCs) have been used for cell-based tissue engineering and regenerative medicine due to their capacity for self-renewal and multi-lineage differentiation (Pittenger et al., 1999; Jiang et al., 2002; Schwartz et al., 2002), and their therapeutic properties, such as migration to the site of damage (Caplan, 1991; Prockop, 1997), expression of trophic factors, and immunosuppressive potential (Kwon et al., 2006; Prockop and Olson, 2007). Although MSCs can be expanded for clinical use in a relatively short time period (Colter et al., 2000; Sekiya et al., 2002), it has been reported that the proliferation rate and differentiation ability of MSCs gradually decrease and that MSCs easily become senescent during serial passages (Digirolamo et al., 1999; Ksiazek, 2009). Thus, to obtain large numbers of MSCs, many investigators have extensively examined optimal culture parameters, including basal medium, glucose concentration, stable glutamine, mononuclear cell plating density, MSC passaging density, plastic surface quality, and growth factors (Colter et al., 2000; Lee et al., 2013; Sekiya et al., 2002; Yang et al., 2015). Moreover, it is important to establish in vitro culture conditions that maintain stemness, which can be defined by the potential to proliferate and differentiate and is known to decrease gradually during serial passage.
To increase the growth rates of MSCs, fibroblast growth factor (FGF)-2, platelet-derived growth factor (PDGF), epidermal growth factor (EGF), and vascular endothelial growth factor (VEGF) are often used to obtain large numbers of MSCs in ex vivo expansion (Zaragosi et al., 2006; Larghero et al., 2008; Tarte et al., 2010). Among these factors, FGF-2 is the most common growth supplement used in MSC culture media.
FGFs belong to a family of heparin-binding growth factors that are known to regulate proliferation, migration, survival, and differentiation in many different cell types and tissues (Oh and Eom, 2016; Okada-Ban et al., 2000; Eswarakumar et al., 2005). FGFs have a heparin-binding site, and interaction with heparin-like molecules is necessary for their stable interaction with FGFRs and local signaling (Goetz et al., 2007). In humans, 22 members of the FGF family have been identified. FGFs are secreted during the healing process of fractures or in sites of bone surgery, implying that FGFs are an important factor in bone development and regeneration (Bolander, 1992). In stem cells such as hematopoietic, mesenchymal, neural, and embryonic stem cells, FGFs regulate self-renewal, maintenance, and proliferation (Craig et al., 1996; Quito et al., 1996; Gritti et al., 1999; Yeoh and de Haan, 2007). FGFs bind tyrosine-kinase receptors, FGF receptors (FGFR) 1-4, and then activate mainly two signaling cascades downstream of the FGFRs to stimulate proliferation or survival. One is the Ras-Raf-mitogen-activated protein kinase (MAPK) proliferation pathway (Kouhara et al., 1997). In parallel, the other is the phosphatidylinositol-3-kinase (PI3K)-Akt cell survival pathway (Kouhara et al., 1997; Schlessinger, 2004).
Previously, we reported that FGF-2 stimulated proliferation via AKT and ERK activation and suppressed autophagy and senescence in long-term culture of BMSCs. Although FGF-2 increased proliferation potential in a dose-dependent manner during short-term cultures (approximately 3 days) or at early passage, the growth rate decreased rapidly under high-dose FGF-2 treatment during long-term culture.
In this study, we showed that a high dose of FGF-2 (>5 ng/ml) decreased the growth rate at late passage through suppression of ERK signaling.
Cell culture
The Institutional Review Board of Yonsei University Wonju College of Medicine approved this study. Human bone marrow (BM) of three healthy donors (age 21~40 years) was obtained with informed consent from Pharmicell Co., Ltd. (Sungnam, Korea). BMSCs were isolated and cultured as previously described (Jang et al., 2014; Bae et al., 2015). Briefly, mononuclear cells from BM aspirates were isolated by density-gradient centrifugation (Histopaque) and cultured at 37℃ in a 5% CO2 atmosphere. After 5 days, the medium was changed to remove non-adherent cells. The medium was changed twice weekly, and cells were passaged when they reached 90% confluence (passage 0). Expanded cells were cryopreserved at passage 1. To perform experiments, cryopreserved cells were thawed, expanded for one more passage, and then used for this study.
Population doubling time (PDT) was determined by dividing the total number of hours in culture by the number of doublings. To evaluate the effects of FGF-2 (R&D) on proliferation potential, autophagy, and senescence, BMSCs were cultured with 1 or 10 ng/ml of FGF-2 during serial passage for two months.
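To make the doubling-time computation above concrete, the following is a minimal sketch in Python; the cell counts are hypothetical, and the log2 step assumes that the number of doublings is derived from initial and final cell counts under exponential growth, which the paper does not state explicitly.

```python
import math

def population_doubling_time(hours_in_culture, n_initial, n_final):
    """PDT = total hours in culture / number of population doublings.

    The number of doublings is taken as log2(n_final / n_initial),
    assuming exponential growth between the two counts.
    """
    doublings = math.log2(n_final / n_initial)
    return hours_in_culture / doublings

# Hypothetical counts: 1e4 cells expanding to 8e4 cells over 96 h
# corresponds to 3 doublings, hence a PDT of 32 h.
print(population_doubling_time(96, 1e4, 8e4))  # -> 32.0
```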
Immunoblotting
The BMSCs were cultured with FGF-2 during the in-
Proliferation of BMSCs by FGF-2
Previously, we reported that 1 ng/ml of FGF-2 can increase the proliferation of BMSCs, but we observed a significant decrease in the proliferation rate when 10 ng/ml of FGF-2 was used for culturing BMSCs for long periods. In order to determine the optimal concentration of FGF-2 for BMSC culture, cells were treated with different concentrations of FGF-2 for 3 days. FGF-2 increased the proliferation of BMSCs in a dose-dependent manner up to a concentration of 10 ng/ml, but the proliferation rate decreased at a concentration of 20 ng/ml (Fig. 1A). To determine the changes in population doubling time (PDT) and the total accumulated number of cells that could be obtained over 2 months of culture, BMSCs were cultured with 1 ng/ml or 10 ng/ml of FGF-2 for 2 months. Unexpectedly, the proliferation rate, subculture number, and accumulated cell number of BMSCs were significantly decreased in the 10 ng/ml FGF-2-treated culture (Fig. 1B). Moreover, PDT increased from 32 hours in the first subculture to 104 hours in the fourth subculture (Fig. 1C). On the other hand, 1 ng/ml of FGF-2 showed increased proliferation (Fig. 1A), a relatively short PDT (Fig. 1C), and approximately 100 times more cells than the control group (Fig. 1B). These results suggest that the use of high concentrations of FGF-2 for BMSC culture is inappropriate.
Fig. 1. Proliferation potential of BMSCs treated with FGF-2. A) Relative increase of growth by treatment with FGF-2 (0~20 ng/ml) for 3 days. B) Accumulated cell number by treatment with 1 or 10 ng/ml of FGF-2 during serial passage for 2 months. C) Population doubling times of BMSCs during serial passage for 2 months. Results are expressed as mean ± SD.
Proliferation signals activated by FGF-2 in BMSCs
Previously, we observed that 1 ng/ml of FGF-2 promoted the proliferation of BMSCs by activating AKT and ERK. BMSCs were treated with FGF-2 at different concentrations (0.5~20 ng/ml) for 3 days to examine the phosphorylation of AKT and ERK. ERK was phosphorylated only at low concentrations of 0.5~1 ng/ml, and at concentrations greater than 1 ng/ml, phosphorylation of ERK was lower than in the control group (Fig. 2A). In order to analyze the change in phosphorylation of AKT and ERK up to 3 days after FGF-2 treatment, 10 ng/ml of FGF-2 was applied. Phosphorylation of AKT was observed from 5 to 30 min, but phosphorylation of ERK was not observed at all and was even lower than in the control group (Fig. 2B). When LY294002, an inhibitor of PI3K that can inhibit AKT activity, was applied together with 10 ng/ml of FGF-2, the proliferation of BMSCs decreased in a dose-dependent manner (Fig. 2C). These results suggest that 10 ng/ml of FGF-2 may induce BMSC proliferation through the activation of AKT, but not through the activation of ERK.
Fig. 2. Proliferation signals activated by FGF-2 in BMSCs. BMSCs were treated with FGF-2 in dose- (0~20 ng/ml for 3 days) and time- (0~3 days at 10 ng/ml) dependent manners. A) Activation of AKT and ERK by FGF-2 at different concentrations (0.5~20 ng/ml) for 3 days. B) Activation of AKT and ERK at different time points by 10 ng/ml of FGF-2. C) Inhibitory effects of LY294002 on growth of BMSCs. The 10 ng/ml FGF-2-dependent growth increase was reversed by inhibition of AKT in a dose-dependent manner. Results are expressed as mean ± SD.
Autophagy and senescence of BMSCs by FGF-2
Previously, we reported that 1 ng/ml of FGF-2 promoted the proliferation of BMSCs and alleviated autophagy and cellular senescence. In order to investigate changes in autophagy and cellular senescence caused by 10 ng/ml of FGF-2, 10 ng/ml of FGF-2 was continuously applied to the BMSC culture. The expression of LC3-II, used as a marker of autophagy, gradually increased in the control group, but no significant change in expression was observed at 1 ng/ml of FGF-2. On the other hand, in BMSCs treated with 10 ng/ml of FGF-2, LC3-II expression was slightly increased compared to the control group (Fig. 3A). As for cellular senescence, 1 ng/ml of FGF-2 reduced senescence, but 10 ng/ml of FGF-2 promoted senescence more than the control condition (Fig. 3B). These results suggest that 10 ng/ml of FGF-2 may inhibit BMSC proliferation by inducing autophagy and senescence of BMSCs.
DISCUSSION
FGF-2 was able to stimulate proliferation of BMSCs in a dose-dependent manner, but continuous prolonged high-dose treatment significantly slowed the proliferation rate. FGF-2 at 10 ng/ml activated AKT but promoted de-phosphorylation of ERK. In addition, 10 ng/ml of FGF-2 increased autophagy and cellular senescence. In order to use MSCs in clinical studies for the treatment of intractable diseases, a large number of cells is required. Although MSCs have been reported to be expandable to a sufficient number of cells within a relatively short period of time (Colter et al., 2000; Sekiya et al., 2002), MSCs have also been reported to slowly lose their proliferative capacity during prolonged cultures. FGF-2, FGF-4, and stromal-derived factor (SDF)-1 are known to modulate the proliferative capacity of MSCs (Quito et al., 1996; Solchaga et al., 2005). Indeed, these growth factors have been used for the cultivation of many primary cells, including mesenchymal stem cells. However, as the results of this study indicate, it should be considered that the use of a growth factor concentration above a certain level in the culture of MSCs may deteriorate their proliferative capacity.
Several growth factors (FGF-2, EGF, TGF-beta, and HGF) have been reported to regulate the differentiation potential of MSCs (Sanchez and Fabregat, 2010). In fact, long-term treatment with 1 ng/ml of FGF-2 resulted in a decrease of the differentiation potentials and of the expression of CD73 and CD90, the cell surface antigens of MSCs, and a marked increase in the expression of HLA-DR (data not shown). Therefore, when growth factors are used to increase the proliferative capacity of MSCs, the types and concentrations of growth factors that influence differentiation potentials and expression of cell surface antigens in MSCs must be evaluated. In addition, autophagy and cellular senescence gradually increase during culture. Therefore, when growth factors are used to increase the proliferative capacity, the effect of growth factors on autophagy and senescence of MSCs must be analyzed as well. In conclusion, although growth factors allow a sufficient number of MSCs to be secured within a short period of time, because growth factors can regulate differentiation, autophagy, aging, and cell surface antigen expression in MSCs, the type and amount of growth factor used in the culture should be strictly determined.
"year": 2017,
"sha1": "74b3c22cea84daa73c2597a4b85bf557c5322a1d",
"oa_license": "CCBYNC",
"oa_url": "http://www.bslonline.org/journal/download_pdf.php?doi=10.15616/BSL.2017.23.2.49",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1cdb1c7f5a9d4e947fcb30301c0cfcabf965433a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
236550395 | pes2o/s2orc | v3-fos-license | A Deep Learning Sentiment Analyser for Social Media Comments in Low-Resource Languages
During the pandemic, when people needed to physically distance, social media platforms have been one of the outlets where people expressed their opinions, thoughts, sentiments, and emotions regarding the pandemic situation. The core objective of this research study is the sentiment analysis of people's opinions expressed on Facebook regarding the current pandemic situation in low-resource languages. To do this, we have created a large-scale dataset comprising 10,742 manually classified comments in the Albanian language. Furthermore, in this paper we report our efforts on the design and development of a sentiment analyser that relies on deep learning. As a result, we report the experimental findings obtained from our proposed sentiment analyser using various classifier models with static and contextualized word embeddings, that is, fastText and BERT, trained and validated on our collected and curated dataset. Specifically, the findings reveal that combining the BiLSTM with an attention mechanism achieved the highest performance on our sentiment analysis task, with an F1 score of 72.09%.
Introduction
Currently, the world is facing the challenges posed by the COVID-19 pandemic [1]. In the last few months, due to these changes, almost the entire population of the world has been affected in its day-to-day operations. Nowadays, people are working, studying, shopping and socializing from a distance. The need for physical distancing has also affected people's emotions and their expression. Social media platforms became one of the main outlets on which people express, among others, their thoughts, sentiments and emotions regarding the pandemic situation. Recent studies also show that social media has been one of the main channels for misinformation [2], especially during the ongoing pandemic crisis. Besides this, social media channels were considered and used by relevant Public Health Authorities for the distribution of information to the wider public [3]. Kosova, as a young country, has been following these trends. The National Institute of Public Health of Kosova has been utilizing its Facebook page to disseminate information and recommendations daily regarding the pandemic situation. These posts have created much engagement with the local population in terms of impressions and comments, where the general public shared their thoughts and emotions, as well as their sentiments regarding the ongoing pandemic that the country was/is going through. The social engagement on Facebook around the Public Health Institute's posts created a rich and diverse set of data that captured quite well the overall public discourse and sentiments around the COVID-19 pandemic in the Albanian language. Sentiment analysis within academia is defined as a computational examination of end-user opinions, attitudes and emotions expressed towards a particular topic or event [4]. Sentiment analysis systems use various learning approaches to detect sentiment from textual data, including lexicon-based [5], machine/deep learning [6,7], combinations of lexicon and machine learning [8], and concept-based learning approaches [9,10]. Sentiment analysis became an important field of research for machine learning applications. Social media sentiment analysis in particular has been one of the main fields of research, especially during the current COVID-19 pandemic [1]. In these studies, the prime focus has been on assessing public sentiment in order to gain insights for making appropriate public health responses. Besides this, other areas where sentiment analysis is applied include, among others, election predictions [11], financial markets [12], and students' reviews [13,14], just to name a few. A common denominator across these diverse application areas is that sentiment analysis is a valuable tool for providing accurate insights into general public opinions about particular topics of interest. The application of sentiment analysis is closely related to the availability of datasets, which are usually available for high-resource languages. In our case of analysis, we were dealing with a relatively low-resource language, the Albanian language. Sentiment analysis could thus provide insights into people's opinions during the pandemic and about the measures taken for its prevention. Motivated by this, we designed this study, in which we created a dataset and evaluated different deep learning models and conventional machine learning algorithms on a low-resource language such as the Albanian language.
The main contributions of this article are as follows:
• The collection of a large-scale dataset composed of 10,742 manually classified Facebook comments related to the COVID-19 pandemic. To the best of our knowledge, this is the first study that performed sentiment analysis of Facebook comments in a low-resource language such as the Albanian language.
• A deep learning based sentiment analyser called ALBANA is proposed and validated on the collected and curated COVID-19 dataset.
• An attention mechanism is used to characterize the word-level interactions within a local and global context to capture the semantic meaning of words.
The rest of the paper is structured as follows: Section 2 presents some related work, especially from the context of sentiment analysis in the Albanian language. In Section 3, we describe the methodology used to conduct the research. A description of the dataset and classifier models used to conduct the experiments is provided in Section 4. Section 5 depicts experimental results followed by their discussion presented in Section 6. The paper concludes with some future directions presented in Section 7.
Related Work
Albanian is spoken by over 10 million speakers: it is the official language of Kosovo and Albania, one of two official languages in North Macedonia, and is spoken by the Albanian community in the Balkans and the region, as well as among the large Albanian migrant community residing mainly in European countries, America and Oceania. Albanian is an Indo-European language, an independent branch of its own, featuring lexical peculiarities that distinguish it from other languages [15].
Considering the advancement of research in Natural Language Processing (NLP), and in sentiment analysis in particular, for many widely spoken languages, NLP and sentiment analysis research on the Albanian language lags behind even some other low-resource languages.
Sentiment analysis research for the English language has already achieved significant results, advancing not only in adopting the latest theories in the areas of lexical analysis and machine learning (ML), but also at the application level [6,7,16-19].
In [16], the literature on sentiment analysis using different ML techniques over social media data to predict epidemics and outbreaks, or for other application domains, is surveyed. ML, linguistic-based and hybrid approaches for sentiment analysis are compared. ML approaches take precedence over linguistic-based ones, except for short sentences. Classical ML techniques such as SVM, Naive Bayes, Logistic Regression, Random Forest and Decision Trees are shown to be most accurate, each for a certain dataset and domain.
In [17], in an analysis of COVID-19-related content in 85.04M tweets from 182 countries during March to June 2020, the distribution of sentiments was found to vary over time and country, thus uncovering the public perception of emerging policies such as social distancing and remote work. The authors conclude that social media analysis for other platforms and languages is critical towards identifying misinformation and online discourse.
In [18], the Facebook pages of Public Health Authorities (PHAs) and the public response in Singapore, the United States, and England from January 2019 to March 2020 are analyzed in terms of outreach effects. Among the metrics measured are mean posts per day ranging from 1.4 to 5, mean comments per post ranging from 12.5 to 255.3, mean sentiment polarity, the ratio of positive to negative sentiments ranging from 0.55 to 0.94, and toxicity in comments, which turned out to be rare across all PHAs.
In [6], the authors seek to understand the usefulness/harm of tweets by identifying sentiments and opinions in themes of serious concern like pandemics. The proposed model for sentiment analysis uses deep learning classifiers with an accuracy of up to 81%. Another proposed model is based on fuzzy logic and implemented with SVM, with an accuracy of 79%.
In [19], sentiment analysis of Tweets about Coronavirus using Naive Bayes and Logistic Regression is presented. Tweets of varying lengths, that is, less than 77 characters (small to medium) and less than 120 characters (longer) are analyzed separately. Naive Bayes performed better on classifying small to medium size Coronavirus Tweets sentiments with an accuracy of 91%. For longer Tweets, both methods showed weak performance with an accuracy not over 57%.
The reaction of people from different cultures to the Coronavirus expressed on social media and their attitudes about the actions taken by different countries are analyzed in [7]. Tweets related to COVID-19 were collected for six neighboring countries with similar cultures and circumstances, including Pakistan, India, Norway, Sweden, the USA and Canada. Three different deep learning models, including DNN, LSTM, and CNN, along with three general-purpose word embedding techniques, namely fastText, GloVe and GloVe for Twitter, were employed for sentiment analysis. The best performance, with an F1-score of 82.4%, was achieved by LSTM with fastText.
Work on other languages concerning sentiment analysis is also growing, such as on German [20,21], Swedish [22,23], or multilingual social media posts [24]. A detailed description of the past and recent advancements on multilingual sentiment analysis conducted on both formal and informal languages used on online social platforms is explored in the survey conducted by Lo et al. in [25].
There are only a few works on sentiment analysis (opinion mining) in the Albanian language [26-28], as well as a few related to emotion detection in the Albanian language [29,30].
In [26], an ML model is developed to classify documents as having positive or negative sentiment. The corpus built to develop the model consists of 400 documents covering five different topics, each topic represented by 80 documents tagged evenly with positive and negative sentiment. Six different ML algorithms, namely Bayesian Logistic Regression, Logistic Regression, SVM, Voted Perceptron, Naive Bayes and Hyper Pipes, are used for classification, performing with 86% to 92% accuracy depending on the topic. The corpus, consisting of political news articles, is characterized by a complex language and a very rich technical vocabulary. The paper concludes that a larger dataset in the Albanian language is needed to achieve a high-performance sentiment classifier.
In [27], a comprehensive selection of ML algorithms is evaluated for opinion mining in the Albanian language, resulting in five best-performing algorithms: Logistic and Multi Class Classifier, Hyper Pipes, RBF Classifier, and RBF Network, with 79% to 94% of correctly classified instances. The opinions are classified as positive or negative. The classification model is developed over a corpus of 500 newspaper articles in Albanian covering 5 different subjects, each with a balanced set of articles with positive and negative opinions. The results also varied from subject to subject. This research is later extended from an in-domain corpus to multi-domain corpora combining opinions from 5 different topics [28]. All the corpora are used to train and test the performance of 50 classification algorithms implemented in Weka for opinion mining. The algorithms perform better on the in-domain corpus than on the multi-domain corpus. As the authors state, a bigger corpus in the Albanian language could provide a clearer picture of the performance of classification algorithms for opinion mining.
In [29], a CNN sentence-based classifier is developed to classify a given text fragment into one of six pre-defined emotion classes based on Ekman's model: joy, fear, disgust, anger, shame and sadness. Experimental evaluation shows that the deep learning model (CNN), with a classification accuracy of emotions ranging from 67% to 92.4%, overall outperforms three classical classification algorithms: Naive Bayes (NB), Instance-based learner (IBK), and Support Vector Machines (SMO). Findings related to the impact of the length of text on classification are also presented. Stemming the text prior to classification improves the accuracy. Another contribution is the corpus built to develop the model: some 6000 posts by politicians on Facebook in the Albanian language. Further, in [30], the authors extend their framework with clustering to extract representative sets of generic words for a given emotion class. The authors list deep neural network architectures that take the sequential nature of text data into account, such as LSTM, as worth considering for a future emotion detection model.
Our approach follows the rationale that a larger dataset is a prerequisite for developing a model that is not prone to overfitting. Recalling the classification results in the related work mentioned above, there is a huge variation in accuracy between distinct sub-datasets of a size of merely some hundreds of tagged examples. Moreover, the sequential nature of text must be learned from, which makes deep neural network architectures that account for sequential ordering, such as LSTM, worth considering, along with NLP-based representation techniques, that is, static and contextualized word embeddings. Multi-class classification into positive, neutral, and negative sentiments is also of interest to validate the applicability of our approach not only to sentiment analysis, but also to other domains like detecting emotions or other multi-class text mining tasks (e.g., review of items on a scale of 1 to 10) in the Albanian language.
Methodology
The research was carried out using a quantitative research method and comprised five phases, the first two of which constitute human-related tasks, while the remaining three involve a machine. More specifically, the first phase entailed collecting users' posts on Facebook from the day when the first few cases were reported (13 March until 15 August). The second phase consisted of the labeling of the collected posts. A manual labeling process took place, in which three human annotators assessed the attitudes and opinions of users expressed in Facebook posts and classified them into the positive, neutral or negative category.
In the third phase, text pre-processing was performed to remove punctuation, words of length less than or equal to two characters, and words that are not purely constructed of alphabetical characters from users' comments. Additionally, all text comments were converted to lowercase.
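A minimal sketch of this pre-processing step in Python is shown below; the whitespace tokenization and the exact regular expression are assumptions, since the paper does not specify its implementation.

```python
import re

def preprocess(comment: str) -> str:
    """Lowercase, strip punctuation, and drop tokens that are shorter
    than three characters or not purely alphabetic, as described above."""
    comment = comment.lower()
    comment = re.sub(r"[^\w\s]", " ", comment)  # replace punctuation with spaces
    tokens = [t for t in comment.split() if len(t) > 2 and t.isalpha()]
    return " ".join(tokens)

# Example on a comment similar to those in Table 2
print(preprocess("Bravo ekipet e IKShP, pune te shkelqyeshme!!!"))
# -> "bravo ekipet ikshp pune shkelqyeshme"
```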
The fourth phase involved a representation model to prepare and transform the posts into an appropriate numerical format to be fed into the sentiment classifiers. A bag-of-words representation model was employed through its term frequency-inverse document frequency (tf*idf) implementation. Furthermore, we used a representation model that generates dense vector representations for the words occurring in comments, known as word embeddings. A static pre-trained word embedding method called fastText, along with a contextualized word embeddings model, BERT, were used to learn and generate word vectors.
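As an illustration of the two representation families, the sketch below builds sparse tf*idf vectors with scikit-learn and looks up dense static vectors from pre-trained fastText embeddings. The use of scikit-learn, the fasttext package, and the publicly released Albanian model cc.sq.300.bin are assumptions rather than the authors' stated tooling; contextualized BERT embeddings would be obtained analogously from a pre-trained transformer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import fasttext  # pip install fasttext

corpus = [
    "bravo ekipet ikshp pune shkelqyeshme",  # pre-processed comments
    "keni kalu tash monotoni",
]

# Sparse bag-of-words features weighted by tf*idf
vectorizer = TfidfVectorizer()
X_tfidf = vectorizer.fit_transform(corpus)   # shape: (n_comments, vocab_size)

# Dense 300-d static word embeddings from pre-trained fastText vectors
# (assumes the Albanian model cc.sq.300.bin was downloaded beforehand)
ft_model = fasttext.load_model("cc.sq.300.bin")
vector = ft_model.get_word_vector("shkelqyeshme")  # numpy array of length 300
```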
The final phase consists of the sentiment analyser, which aims to predict the sentiment of each user's comment as one of the three categories, namely positive, neutral or negative. The analyser involves several classifiers, including deep neural networks as well as conventional machine learning algorithms, for sentiment orientation detection.
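The abstract reports that a BiLSTM combined with an attention mechanism performed best; the following Keras sketch shows one common way to realize such a classifier for the three sentiment classes. The vocabulary size, sequence length, layer sizes and the additive attention-pooling formulation are illustrative assumptions, not the authors' reported configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, VOCAB_SIZE, EMB_DIM = 100, 20000, 300  # hypothetical hyper-parameters

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
# The embedding matrix could be initialized with pre-trained fastText vectors
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # (batch, MAX_LEN, 128)

# Attention pooling: score each time step, normalize the scores, and take
# the weighted sum of BiLSTM hidden states as the comment representation
scores = layers.Dense(1, activation="tanh")(h)             # (batch, MAX_LEN, 1)
weights = layers.Softmax(axis=1)(scores)                   # attention weights over time
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

outputs = layers.Dense(3, activation="softmax")(context)   # positive / neutral / negative
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```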
A high-level architecture of the proposed ALBANA analyser involving all the phases elaborated above is illustrated in Figure 1.
Experiments
This section describes the data collection and annotation procedure applied to creating the dataset as well as the classifier models used to conduct the sentiment classification task.
Dataset
The dataset consists of people's opinions expressed towards daily Facebook posts of the National Institute of Public Health of Kosova (NIPHK) (https://www.facebook.com/ IKSHPK, accessed on 5 April 2021) regarding the spread of the COVID-19 virus in the Republic of Kosova. Dataset creation involved data collection and annotation, which are described in the following subsections.
Dataset Collection
We collected comments from the official Facebook page of the NIPHK Institute for a period of 6 months, from 13 March till 15 August 2020; 13 March marked the confirmation of the first cases of COVID-19 in Kosova. To retrieve comments, we used CommentExporter (http://www.commentexporter.com, accessed on 10 December 2020), a tool that exports the original comments to an Excel file. The open-source version of this tool is limited to a maximum of 300 exported comments per usage, excluding replied comments. Due to this limitation, a few days (e.g., 27 July) during this 6-month period are not included because the number of comments exceeded 300. Additionally, a few days (e.g., 15 March) are missing because there was no official announcement (no post) from the NIPHK Institute. The total number of collected comments is 10,742, and this constitutes the first version of the dataset, referred to as version 1.0. Dataset 1.0 contains the unlabeled comments and some metadata, as illustrated in Figure 2.
Two further versions followed until the final dataset was created. The second version added two new manually extracted post-related features: the post's timestamp and the URL of the post. In the third version, we added three more post-related features: the number of deaths, the number of infected persons, and the number of healed persons on the day the post was published. These three features were also manually extracted from the content of each post. The third version evolved into the final dataset by labeling the comments. To avoid bias and make the labeling process more objective, all comments were annotated by three human annotators, third-year bachelor students in the Computer Engineering department at the University of Prishtina. Majority voting was then applied to obtain the final sentiment label for each comment. The inter-annotator agreement, determined by computing Pearson's correlation between the scores given by each annotator, is depicted in Table 1. The correlation coefficients indicate strong agreement between annotator 1 and annotators 2 and 3, and moderate agreement between annotators 2 and 3. Table 2 shows a few examples labeled as neutral, positive, and negative for which perfect agreement among the three annotators was achieved.
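The aggregation step can be illustrated as follows: majority voting over the three annotators plus pairwise Pearson correlation. The integer encoding of the labels is an assumption for illustration; the paper does not state how scores were encoded.

```python
from collections import Counter
from scipy.stats import pearsonr

# Toy scores: 1 = positive, 0 = neutral, -1 = negative (assumed encoding).
a1 = [1, 0, -1, 1, 0]
a2 = [1, 0, -1, 0, 0]
a3 = [1, 0,  0, 1, 0]

# Majority vote across the three annotators per comment.
final = [Counter(votes).most_common(1)[0][0] for votes in zip(a1, a2, a3)]

# Pairwise inter-annotator agreement (Pearson's r), as in Table 1.
r12, _ = pearsonr(a1, a2)
print(final, round(r12, 3))
```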
Table 2. Examples of comments annotated with perfect agreement among annotators.

Comment (English translation) | Sentiment
Do te thot Peja edhe sonte spaska asnje rast (It means that even tonight Peja does not have any case) | Neutral
Bravo ekipet e IKShP per punen e shkelqyeshme dhe perkushtimin! (Well done the NIPHK teams for the great job and dedication!) | Positive
Keni kalu tash ne monotoni, te pa arsyshem jeni tash. (You have now passed into monotony, you are now unreasonable.) | Negative

The most challenging part of comment labeling was assigning a sentiment to comments expressing opinions on several topics/entities at once. The sentiment was assigned to the entire comment; no analysis of individual entities or sentences within the comment was carried out. For example, the comment "Comment No 894: Juve stafit mjeksor respekt ndersa ktyre qe jane raste kontakti qe nuk kane nejt ne shtepi po kane shku musafir e jone infektu turp! (Respect to the medical staff while shame on contact cases who have not stayed at home but have visited relatives and got infected!)" opens with a positive sentence expressing positive sentiment towards the medical staff, whereas the second part expresses negative sentiment towards contact cases who did not isolate and got infected. This comment contains both positive and negative sentiments, and such comments were typically annotated differently by the human annotators.
Another challenging aspect for the annotators was labeling comments containing figurative language such as sarcasm and irony. Figurative language is highly dependent on context, environment, and topic, which made it difficult for annotators to identify the actual sentiment expressed; as a result, such comments might have been annotated differently by different annotators. For instance, a sarcastic comment without any contextual clues might be understood and annotated differently by each annotator.
The final dataset, a screenshot of which is illustrated in Figure 3, contains 13 attributes that are described in the following:
Dataset Statistics
As described in the previous section, the curated dataset contains three classes; their distribution over the users' comments is depicted in Table 3. The distribution statistics show that the dataset is highly imbalanced, with the neutral sentiment class comprising more than half of the comments (56.4%), followed by negative and positive comments with 28.0% and 15.6%, respectively. It is also interesting to note that the dataset contains comments of various lengths. The shortest comment is composed of 1 word and the longest comprises 212 words; the average length over the entire corpus is 16.01 words. The length variation is depicted in Figure 4, where the histogram illustrates the number of words per comment distributed by sentiment. As can be seen in Figure 4, negative comments are generally the longest, with an average length of 22.68 words per comment; in fact, only one comment in the negative class is below the average length of comments in the entire corpus. On the other hand, neutral and positive comments tend to be shorter, with average lengths of 13.65 and 12.59 words per comment, respectively. Figure 5 depicts the number of comments distributed across the months. As can be seen from the graph, there was an increasing trend of COVID-19-related comments during April and July, and this trend holds for all three comment types (neutral, positive, and negative). It is also notable that during June and July there is a significant increase in negative comments; an explanation is that the first wave of the COVID-19 pandemic hit Kosova during this time, with new cases and the death toll growing rapidly.
Deep Neural Networks
To identify the opinion orientation of users towards the COVID-19 pandemic expressed in Facebook comments, we employed three different deep neural networks, namely 1D-CNN, BiLSTM, and a hybrid 1D-CNN + BiLSTM model, as depicted in Figure 6. We chose these networks because of the different nature of their text modeling capabilities: 1D-CNN can extract local features from a comment, BiLSTM is good at capturing contextual information from both directions as well as long-range dependencies, and the hybrid model combines the advantages of the two complementary architectures. The architecture of 1D-CNN consists of an input layer, an output layer, and 5 hidden layers, as shown in Figure 6a. The input layer takes a textual comment padded to a fixed length of 20 words, followed by an embedding layer comprising word embeddings of size 300. Next comes an attention layer that extracts high-level feature vectors. The attention layer is a sub-unit comprising context vectors that align the source input, denoted by x_1, x_2, ..., x_n, with the target output, denoted by y_1, y_2, ..., y_n. An illustration of the attention mechanism is shown in the top right corner of Figure 6a. Feature vectors extracted from the attention layer serve as inputs to a SpatialDropout1D layer. A Conv1D layer (bottom right) with 512 one-dimensional convolution filters of size 3 and a ReLU activation function is applied on top of the dropout layer. Finally, a fully-connected dense layer with 3 units and a softmax function computes the probability distribution over the three sentiment orientations (positive, neutral, negative).
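A Keras sketch of this 1D-CNN branch is given below, following the layer sizes stated in the text (20-word input, 300-dimensional embeddings, 512 filters of size 3, softmax over 3 classes). The attention sub-layer is approximated with Keras' built-in dot-product self-attention, since the exact variant used is not specified.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN, EMB_DIM = 20_000, 20, 300

inp = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, EMB_DIM)(inp)
x = layers.Attention()([x, x])             # self-attention over the embeddings
x = layers.SpatialDropout1D(0.3)(x)        # dropout rate 0.3, see Section 5.1
x = layers.Conv1D(512, 3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
out = layers.Dense(3, activation="softmax")(x)

model = models.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
```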
The second network applied to detect the opinion orientation of Facebook users is a BiLSTM architecture, illustrated in Figure 6b. This architecture differs only slightly from the one shown in Figure 6a: the Conv1D and GlobalMax layers are replaced with BiLSTM and Flatten layers, respectively. As for the 1D-CNN, an illustration of the BiLSTM architecture and the attention mechanism is shown on the right side of Figure 6b.
The third network architecture, illustrated in Figure 6c, is a hybrid model that combines the two complementary deep neural models, 1D-CNN and BiLSTM, together with the attention mechanism, into a single unified architecture. Specifically, a 1D-CNN layer is applied on top of the embedding layer to capture local features such as n-grams. These features serve as inputs to the BiLSTM layer, which models contextual information in both directions (backward and forward) and captures long-range dependencies in the comment. An attention layer is then applied to the BiLSTM outputs to emphasize important information by assigning different weights to different words. Finally, an output layer, preceded by a dense layer that maps the extracted features to a more separable space, is applied.
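A sketch of the hybrid architecture in Keras follows. The unit counts for the BiLSTM and dense layers are assumptions, as the text does not state them; the layer ordering matches the description above.

```python
from tensorflow.keras import layers, models

inp = layers.Input(shape=(20,))
x = layers.Embedding(20_000, 300)(inp)
x = layers.Conv1D(512, 3, activation="relu", padding="same")(x)  # local n-grams
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Attention()([x, x])                # weight informative words
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dense(64, activation="relu")(x)    # map to a more separable space
out = layers.Dense(3, activation="softmax")(x)
model = models.Model(inp, out)
```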
Conventional Machine Learning Models
This section briefly discusses the conventional machine learning models employed in this study for sentiment classification. The models include Support Vector Machine (SVM), Naive Bayes (NB), Decision Tree (DT), and Random Forest (RF).
SVM is a classifier that can be either a parametric or a non-parametric model, depending on linearity. Linear SVM is parametric as it contains a fixed number of parameters given by the weight coefficients, whereas non-linear SVM can be considered non-parametric due to the kernel matrix, which is created by calculating pair-wise distances between feature vectors.
NB is a parametric classifier that applies a statistical model to learn the underlying function from the training data. The learned function is characterized by a fixed number of parameters defined by Bayes' rule. As the name indicates, the 'naive' part of this classifier comes from the assumption of strong independence between the features given the class variable.
DT is a non-parametric classifier that does not use a fixed set of parameters to learn the underlying probability density function. It employs a tree-structured model to perform classification and uses the information provided by the training samples alone. Overfitting is the major limitation of DT; to overcome it, multi-classifier systems such as Random Forest have emerged.
RF belongs to the family of multi-classifier systems, as it combines multiple decision trees into a single architecture. In contrast to single decision tree classifiers, which cannot handle noise properly, RF is robust to noise and outliers thanks to its randomness, which reduces the variance between the different decision trees.
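The four CML classifiers can be sketched with scikit-learn as below, using tf*idf features. Parameters are left at their defaults apart from the RF maximum depth of 200 reported later in the Results section; the toy texts stand in for the real comments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

texts = ["bravo punen mire", "turp keni", "asnje rast sot", "respekt stafit"]
labels = ["positive", "negative", "neutral", "positive"]

for clf in (LinearSVC(), MultinomialNB(), DecisionTreeClassifier(),
            RandomForestClassifier(max_depth=200)):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(texts, labels)
    print(type(clf).__name__, pipe.predict(["punen mire"]))
```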
Results
This section provides the experimental results obtained from various sentiment classifiers trained and validated on our collected dataset.
Parameter Settings
All deep neural networks employed for sentiment classification were implemented using Keras (https://keras.io, accessed on 5 April 2021), an open-source Python library. Scikit-learn (https://scikit-learn.org/stable/, accessed on 2 April 2021), a simple and efficient Python tool, was used for developing the conventional machine learning algorithms. The BERT model was implemented on the Peltarion (https://peltarion.com/platform, accessed on 7 April 2021) operational AI platform. The maximum number of words used in the tokenizer model was set to 20,000, and each input comment sequence was padded to 20 words.
The following hyper-parameters were used to conduct the experiments: a training batch size of 256, the Adam stochastic optimizer with a learning rate of 0.001, categorical cross-entropy as the loss function, and accuracy as the metric to detect the convergence of the models. The number of epochs used to train and validate the models was set to 15. To avoid overfitting in our deep neural networks, we used a dropout strategy in which certain units (neurons), along with their incoming and outgoing connections, were temporarily removed from the network. Dropout prevents units from co-adapting too much to the training data and thus leads to better generalization on the testing set [31]. In our case, the dropout rate was set to 0.3.
The dataset was split into three sets: 70% for training, 15% for validation and the remaining 15% for testing.
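The split and training configuration described above might be set up as follows; the placeholder arrays stand in for the padded sequences and one-hot labels, and the fixed random seed is an added assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.randint(0, 20_000, size=(1000, 20))     # placeholder sequences
y = np.eye(3)[np.random.randint(0, 3, size=1000)]     # one-hot labels

# 70% train, 15% validation, 15% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30,
                                                  random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50,
                                                random_state=42)

# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=256, epochs=15)   # hyper-parameters from Section 5.1
```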
Our Baseline Model
First, we established a baseline model, a simple deep neural network (DNN) with an architecture similar to the one reported in [7]. Specifically, the DNN architecture shown in Figure 7 consists of an embedding layer with 300 dimensions, a GlobalMaxPooling layer, three dense layers with 128, 64, and 32 units and ReLU activations, and an output layer with 3 units and a softmax function. The sentiment classification performance of the baseline model in terms of precision, recall, and F1 score is given in Table 4.
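The baseline can be written compactly with the Keras Sequential API; the sketch below follows the stated layer sizes and is not the authors' exact code.

```python
from tensorflow.keras import layers, models

baseline = models.Sequential([
    layers.Input(shape=(20,)),
    layers.Embedding(20_000, 300),
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),   # positive / neutral / negative
])
```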
Deep Neural Networks
In the next set of experiments, we conducted sentiment analysis using the deep neural networks described in Section 4.2.
Attention Mechanism
In this section, we examine the effect of the attention mechanism on capturing long-range dependencies in the collected comments. For this purpose, an attention layer considering either a global or a local context is used on top of BiLSTM to extract high-level features. The global context characterizes the entire comment and is too broad; the local context is defined over a small window whose size can vary. In our case, we tested local contexts of various sizes, from 2 up to 10 words, as illustrated in Figure 8. Classification performance increases with the context width up to a window size of 8, where the highest performance is achieved, and gradually degrades as the window grows beyond 8 words. We therefore chose a window size of 8 words as the optimal context for extracting semantic features with the attention mechanism in the remaining sentiment classification experiments.
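The exact attention variant is not published; a common additive attention pooling over BiLSTM outputs is sketched below. A local context of width w would restrict the score computation to a w-word window around each position, a restriction omitted here for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

class AttentionPooling(layers.Layer):
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, d),
                                 initializer="glorot_uniform")
        self.v = self.add_weight(name="v", shape=(d, 1),
                                 initializer="glorot_uniform")

    def call(self, h):                                   # h: (batch, seq, d)
        scores = tf.matmul(tf.tanh(tf.matmul(h, self.W)), self.v)
        alpha = tf.nn.softmax(scores, axis=1)            # attention weights
        return tf.reduce_sum(alpha * h, axis=1)          # weighted sum

x = tf.random.normal((4, 20, 256))                       # e.g., BiLSTM outputs
print(AttentionPooling()(x).shape)                       # (4, 256)
```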
1D-CNN with and without Attention Mechanism
Next, we investigated the effect of integrating an attention mechanism into the one-dimensional convolutional neural network (1D-CNN). The network integrates the attention layer to obtain high-level features of the comments for training the sentiment classification model. To show the benefit of this mechanism, Table 5 reports side by side the results exhibited by 1D-CNN with and without attention. The results show that 1D-CNN with attention (1D-CNN + Att) generally outperforms the plain 1D-CNN model in sentiment classification, achieving an F1 score of 71.56%. Notably, the most substantial improvement of the 1D-CNN + Att model is on the positive class, where the F1 score increases from 63.85% to 67.16%.
BiLSTM with and without Attention Mechanism
This section examines the performance of BiLSTM on the sentiment classification task. Specifically, we conducted experiments with two different classification settings in terms of network architecture. The first setting consists of a BiLSTM architecture with an embedding layer and a dense layer; the second extends this architecture with an attention layer integrated on top of the BiLSTM. The results of both architectures with respect to precision, recall, and F1 score are summarized in Table 6.
Hybrid Model with and without Attention Mechanism
Next, we investigated the effect of the hybrid model in which the two complementary deep networks, 1D-CNN and BiLSTM, are combined into one unified architecture for sentiment classification. As for the two architectures described in Sections 5.3.2 and 5.3.3, the same classification settings with respect to the attention mechanism were explored. The performance of the hybrid model with and without the attention mechanism is summarized in Table 7. The results show that the best overall and class-wise performance on the sentiment classification task was achieved when the attention mechanism was applied.
Static Word Embeddings
In this section, we analyze the effect of general-purpose pre-trained word embeddings on the sentiment classification task. More specifically, we used 300-dimensional pre-trained word vectors for the Albanian language, trained with the fastText model on the free online encyclopedia Wikipedia and on data from the Common Crawl project. These vectors were fed to four different neural networks, namely DNN, 1D-CNN, BiLSTM, and the hybrid model. The results summarized in Table 8 show that 1D-CNN, with an F1 score of 70.45%, achieved the best classification performance, followed by BiLSTM and the hybrid model with F1 scores of 68.95% and 68.18%, respectively. Overall, the results are slightly worse than those shown in Tables 5-7, and one possible explanation is the out-of-vocabulary issue. Even though fastText mitigates this problem to some extent by using character n-gram level representations, we still obtained a high number of null word embeddings: 7852 out of 22,859 unique tokens. FastText is trained on documents that are generally written in standard language, whereas our dataset consists of Facebook posts written without regard to the general rules and standards of the Albanian language (e.g., spelling mistakes) and containing non-standard words and phrases such as slang, abbreviations, emojis, and so forth.
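Building the embedding matrix from the pre-trained vectors, with unmatched tokens left as null (zero) embeddings, could look like the sketch below; the file name (the Albanian fastText .vec file) and the toy vocabulary are assumptions.

```python
import numpy as np

EMB_DIM = 300
word_index = {"bravo": 1, "punen": 2}          # toy tokenizer vocabulary

vectors = {}
with open("cc.sq.300.vec", encoding="utf-8") as f:
    next(f)                                    # skip the "count dim" header
    for line in f:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")

emb = np.zeros((len(word_index) + 1, EMB_DIM)) # row 0 reserved for padding
missing = 0
for word, i in word_index.items():
    if word in vectors:
        emb[i] = vectors[word]
    else:
        missing += 1                           # stays a null (zero) embedding
print(f"{missing} of {len(word_index)} tokens have null embeddings")
```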
Contextualized Word Embeddings
In the same way as in Section 5.3.5, we investigated the effect of contextualized word embeddings on the sentiment classification task. In particular, we employed the mBERT model, as illustrated in Figure 9. The model architecture (https://tinyurl.com/3c2vk2zf, accessed on 7 April 2021) comprises an input layer representing textual comments passed to the BERT tokenizer layer, which converts them into tokens. A sequence of 128 tokens is then fed to the Multilingual BERT (mBERT) encoder layer. The encoder in mBERT is an attention-based architecture composed of 12 successive transformer layers trained on Wikipedia pages with a shared vocabulary across 104 languages, including Albanian. The output of the mBERT layer is a vector that is passed to a dense layer with a softmax function to predict one of the three opinion classes, that is, positive, neutral, or negative. The class-wise and weighted average performance of the mBERT model with respect to precision, recall, and F1 score on our dataset is summarized in Table 9. Figure 10 depicts the confusion matrix of the sentiment classifier using mBERT. A quick glimpse at the confusion matrix shows better class-wise performance with the mBERT model. Specifically, in the negative sentiment class, 69.20% of the comments are correctly classified using mBERT, compared to 63.9%, 62.44%, 62.48%, and 63.16% of comments correctly classified by the Baseline, 1D-CNN + Att, BiLSTM + Att, and Hybrid + Att models, respectively.
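The paper fine-tunes mBERT on the Peltarion platform; purely as an illustration, an equivalent setup with the Hugging Face transformers library might look like the sketch below (this is not the authors' implementation).

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)  # positive/neutral/negative

batch = tok(["Bravo ekipet e IKShP!"], padding="max_length",
            truncation=True, max_length=128, return_tensors="pt")
logits = model(**batch).logits                     # shape (1, 3)
print(logits.softmax(-1))
```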
Conventional Machine Learning Models
In the second round of experiments, we analyzed the performance of conventional machine learning (CML) models using a Bag-of-Words (BoW) representation on the sentiment classification task. The CML models comprise the four classifiers described in Section 4.3. Two BoW implementations, namely count occurrence (tf) and term frequency-inverse document frequency (tf*idf), were employed as feature representations to feed the CML classifiers. Parameter values for all CML classifiers were left at their defaults, except for RF, where the maximum depth of the tree was set to 200. The obtained results with respect to weighted precision, recall, and F1 score are summarized in Tables 10 and 11. As can be seen, better classification performance is achieved when the tf*idf vectorizer is used to generate the textual features fed to the CML classifiers, compared to features extracted with the count vectorizer. It is also interesting to note that RF outperformed all the other CML classifiers in both settings, achieving F1 scores of 70.49% and 71.44% using tf and tf*idf, respectively.
A summary of the results for all the classifier models using the various embeddings, including domain embeddings, static and contextualized embeddings (fastText, mBERT), and distributional representations (tf, tf*idf), is depicted in Table 12. As highlighted in Table 12, BiLSTM with an attention mechanism and word embeddings generated from our collected dataset outperforms the other classifier models, achieving an F1 score of 72.09%.
Discussion
Based on the experimental results provided in Section 5, the deep learning models (1D-CNN, BiLSTM, Hybrid, and BERT) generally perform better than the conventional machine learning models (SVM, NB, DT, RF). This can be attributed to the capabilities of deep neural networks in modeling textual comments. 1D-CNN is good at identifying local features in comments regardless of their position, and it applies pooling to reduce the output dimensionality and extract the most salient features. BiLSTM, on the other hand, captures contextual information in both the forward and backward directions and learns long-range dependencies from the comments. The hybrid classifier combines the advantages of both complementary architectures, whereas the BERT classifier is capable of understanding the meaning of each word using a bidirectional strategy and the attention mechanism.
It is also interesting to note that the performance of all the deep learning models improved with the attention mechanism. This mechanism explicitly makes the classifiers more robust in capturing the semantic meaning of each word within a local or global context. The empirical data (Figure 8) showed that, in our case, the local context works better than the global one.
Another interesting observation from the experimental results is the better class-wise performance achieved by the deep learning classifiers compared to the conventional machine learning models, with a particularly evident improvement in classes with small numbers of comments. More specifically, the neutral class registered an average F1 score of 62.38% when deep learning classifiers with the attention mechanism and domain embeddings were applied for sentiment classification, compared to an average F1 score of 55.91% obtained by the conventional classifiers with tf*idf features.
Despite the better performance of the deep learning classifiers on our sentiment classification task, conventional machine learning models still offer a few advantages. One is that they are financially and computationally cheap: they run on a decent CPU and do not require expensive hardware such as GPUs or TPUs. Another advantage is interpretability: these models are easy to interpret and understand because they involve direct feature engineering, in contrast to deep learning models, which extract features automatically.
In general, the results are encouraging given that Albanian is considered a resource-constrained language and faces many challenges in natural language processing tasks in general and in sentiment analysis in particular. These challenges involve both technical and linguistic aspects. From a technical point of view, systems for sentiment analysis of Albanian text face a scarcity of NLP tools and resources, such as tools for stemming and lemmatization and lists of stop words. From a linguistic perspective, several phenomena affect the performance of sentiment analysis systems for Albanian, including explicit and implicit negation, slang words and acronyms, and figurative language (sarcasm, irony).
Conclusions and Future Work
This article presented a sentiment analyser for extracting the opinions, thoughts, and attitudes that people express on social media about the COVID-19 pandemic. Three deep neural networks utilizing an attention mechanism and a pre-trained embedding model (fastText) were trained and validated on a real-life large-scale dataset collected for this purpose. The dataset consists of users' comments in the Albanian language posted on the NIPHK Facebook page between March and August 2020. Our findings showed that the proposed sentiment analyser performs well, outperforming the baseline classifier on the collected dataset; specifically, an F1 score of 72.09% is achieved by integrating a local attention mechanism with BiLSTM. These results are promising considering that the dataset is composed of social media user-generated comments, typically written informally, without regard to the standards of the Albanian language, and containing informal words and phrases such as slang, emoticons, and acronyms. The findings validate the usefulness of our approach as an effective solution for handling users' sentiment expressed on social media in low-resource languages.
In future work, we will focus on studying more colloquial textual data from social platforms such as Twitter and Instagram, and on proposing deep learning models enriched with semantically rich representations [32] for effectively extracting people's opinions and attitudes. Another interesting aspect to investigate is the use of emojis (emoticons) as input data, because they are also an effective way for people to express emotions and attitudes towards an event. Furthermore, since the collected dataset is highly imbalanced, with the neutral class comprising more than half of the comments, future work will concentrate on applying data balancing strategies, including synthetic data generation and oversampling techniques such as SMOTE, as well as text generation models such as GPT-2.
Author Contributions: Z.K. contributed throughout the article development, including conceptualisation, methodology, formal analysis, writing the original draft and supervision. L.A. and A.K. contributed to the conceptualisation of the idea, investigation, validation and writing reviews, F.K., D.M. and F.G. have been resources and contributed to software development and data curation. All authors have read and agreed to the published version of the manuscript.
Funding: The APC was funded by an Open Access Publishing Grant provided by Linnaeus University, Sweden.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-05-21T13:12:46.398Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "f56b0cbfed28686b65d638aee0448b2199e33d62",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/10/10/1133/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9073efa5c4190d9562a7cbb6e0797f9d1b910713",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225929836 | pes2o/s2orc | v3-fos-license | The Analysis of Liquidity and Its Effect on Profitability, Sales, and Working Capital Policy in Manufacturing Companies Listed on Indonesia Stock Exchange
Purpose – This study aims to analyze liquidity through cash conversion cycle management and to examine the effect of liquidity on profitability, sales, and working capital policy in manufacturing companies listed on the Indonesia Stock Exchange in the period 2014-2018. Profitability is proxied by return on assets, sales by total sales, and working capital policy by short-term debt to assets and current assets to total assets. Design/methodology/approach – The population of the study is manufacturing companies listed on the Indonesia Stock Exchange. The purposive sampling method was used, yielding 21 manufacturing companies as the sample. The data used in this study are quantitative and come from secondary sources. Descriptive statistics and simple linear regression analysis were used as the research methodology. Findings – Descriptive analysis shows a CCC benchmark for manufacturing companies of 95.0590 days, with the highest value at 408.98 days and the lowest at -19.62 days. CCC is found to have a negative and significant effect on return on assets and total sales, and a positive and significant effect on short-term debt to assets, whereas its effect on current assets to total assets is positive but insignificant. These findings imply that a shorter CCC can enhance profitability and sales, reduce dependency on external financing, and support more productive business activities. Research limitations/implications – This research analyzes the company's liquidity condition only in terms of the cash conversion cycle, that is, the dynamic aspect. Other liquidity ratios, such as the current ratio, cash ratio, and quick ratio, can be used in further research. This research also focuses only on manufacturing companies; companies from other sectors, such as merchandising and services, can be included in further research. Originality/Value – The present study contributes to the literature by measuring the liquidity of manufacturing companies from a dynamic liquidity perspective (the management of the cash conversion cycle), even though most companies still evaluate liquidity from the classical perspective through liquidity ratios. The present research provides a new perspective on liquidity management by considering the time companies require to convert cash as the liquidity assessment for manufacturing companies listed on the Indonesia Stock Exchange.
INTRODUCTION
The manufacturing industry is one of the biggest contributors to Indonesia's economic growth. This is evidenced by the fact that manufacturing remained the biggest contributor to the 2018 national Gross Domestic Product (GDP) at 19.89%, more than any other sector (www.bps.go.id, 2019). High GDP indicates high sales of production goods, and high production requires large funds to meet operational activities.
According to Panigrahi (2018), working capital plays a role in running and financing operational activities. Therefore, companies need to manage working capital effectively and efficiently in order to meet their operational needs; effective and efficient working capital management impacts the company's performance. Excess working capital indicates idle money, whereas a shortage of working capital indicates liquidity problems, so the balance of working capital must be carefully considered.
The purpose of companies in managing their working capital is to achieve high profits. According to Widiyanti & Bakar (2012), high profitability can be achieved through effective working capital management. Nevertheless, the profitability of manufacturing companies fluctuated over the last 5 years, according to reports compiled from the Indonesia Stock Exchange: the return on assets (ROA) of manufacturing companies was 9.98% in 2014, dropped to 7.76% in 2015, rose to 9.59% in 2016, decreased to 9.39% in 2017, and then increased fairly sharply to 10.67% in the following year. This decline in profitability indicates inefficient and ineffective working capital management.
In addition to profitability, sales result from working capital efficiency. The number of sales gained by a company is also influenced by working capital efficiency, as in the research of Motlíček & Martinovičová (2014), which states that the size of sales is strongly influenced by working capital. However, manufacturing companies showed a slowdown in sales in the third quarter of 2019 of only 1.8%, lower than the 4.6% reached in the same period of the previous year (m.bisnis.com, 2019). This sales slowdown indicates inefficiency in working capital management.
Based on a report published by the Indonesia Stock Exchange on problematic issuers, 4 of the 40 listed companies are manufacturing companies: PT. Tiga Pilar Sejahtera Tbk, PT. Sentex Tbk, PT. Jakarta Kyoei Steel Works, and PT. Asia Pacific Fibers Tbk (www.idx.co.id, 2019). One of them, PT. Tiga Pilar Sejahtera Tbk, even experienced bankruptcy problems, a situation in which the company was unable to pay its debts. This indicates inefficient working capital management of its debts. Working capital policy therefore needs to be set by the company to avoid liquidity problems, and working capital efficiency needs to be pursued to overcome business operational problems.
Based on all the phenomena explained above, the balance of working capital is an important consideration in the effort to improve company performance. One way for companies to maintain the stability and effectiveness of working capital management is through liquidity management (Sugathadasa, 2018), where good liquidity is a key determinant of a company's success (Uyar, 2009). The liquidity position of a company is materially affected by the levels of accounts receivable, inventories, and current liabilities, and the effective management of these components is the company's effort to create good liquidity conditions (Majanga, 2015). Thus, management of the cash conversion cycle (CCC) is the most widely used measure for evaluating a company's liquidity management (Sugathadasa, 2018). The cash conversion cycle is the time a company requires to convert its cash outflows back into cash inflows; the shorter the CCC, the better the liquidity condition. Nevertheless, previous studies report mixed results, with some finding that CCC has no relationship with the enhancement of a company's performance (Cristian & Raisa, 2017; Khurramshabbir, 2018; Panigrahi, 2018; Zakari & Saidu, 2016). It is therefore very important for companies to know which policies to take to sustain their business. This research is expected to provide information for management on managing working capital from a dynamic liquidity perspective, and to serve as a reference for developing and supporting further research on similar topics.
Liquidity
Liquidity is the company's ability to cover current liabilities by utilizing current assets (Atmaja, 2003). The more liquid the current assets a company holds, the more it can minimize liquidity risk. The company should therefore determine the most effective way to manage liquidity so that it can meet working capital needs, which ultimately impacts the profitability it obtains. According to Wilujeng (2013), most companies focus on managing liquidity through liquidity ratios, namely the current ratio and quick ratio, which represent a static view of liquidity. However, some liquidity management approaches do not use liquidity ratios: Moss & Stine (1993) and Farris & Hutchison (2014) use a dynamic liquidity measurement, the CCC, to measure the company's liquidity condition and inform subsequent liquidity decisions.
Static Liquidity
Static liquidity is the measurement of company liquidity using various liquidity ratios such as the current ratio, quick ratio, and cash ratio (Atmaja, 2003). This is the classical liquidity measurement, in which managers make decisions based on the interpretation of liquidity ratio values.
Dynamic Liquidity
Liquidity management through dynamic measurement uses the cash conversion cycle. According to Atmaja (2003), the cash conversion cycle is the time in days needed to obtain cash from the company's operations, arising from the collection of accounts receivable plus the sale of inventory minus the payment of debts; the CCC thus measures how long it takes to convert cash back into cash. Following Stephen A. Ross (2013), the cash conversion cycle is computed as:

CCC = Inventory Conversion Period + Receivables Collection Period - Payables Deferral Period
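A small Python sketch of this computation is given below; the component periods follow the usual textbook definitions, and the figures are toy values.

```python
def cash_conversion_cycle(inventory, cogs, receivables, sales, payables,
                          days=365):
    dio = inventory / cogs * days        # inventory conversion period
    dso = receivables / sales * days     # receivables collection period
    dpo = payables / cogs * days         # payables deferral period
    return dio + dso - dpo               # CCC in days

print(round(cash_conversion_cycle(500, 3000, 400, 4000, 300), 2))  # 60.83
```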
Profitability
Profitability is one measurement of company performance; it can be measured through ratios illustrating the company's ability to generate profit using all its capabilities and resources, such as sales, cash, capital, number of employees, and so on (Telly, 2015). Profitability ratios include Return on Assets (ROA), Return on Investment (ROI), Return on Equity (ROE), Net Profit Margin (NPM), Operating Profit Margin (OPM), and Gross Profit Margin (GPM); in this research, only return on assets is used as the proxy for profitability.
Sales
Andayani et al. (2016) explain that sales are the amount charged to customers for goods sold, whether in cash or on credit, while Suzan & Risyana (2018) state that sales are the total goods sold by the company within a certain period; the higher the number of sales, the greater the profit the company is likely to generate. Based on these statements, it can be concluded that sales are the activity carried out by a company to sell its products, in cash or on credit, to customers over a certain period in order to gain profit.
Working Capital Policy
Working capital policy in this research concerns short-term investment and financing policy. According to Atmaja (2003), working capital policy governs the management of current assets and current liabilities and consists of two basic decisions: the level of investment in current assets, and how the company funds those current assets. The working capital investment policy is proxied by current assets to total assets (CA/TA); this ratio shows how large a share of investment the company places in current assets when associated with a certain cash conversion cycle length. Several possibilities arise when the investment policy is linked to the cash conversion cycle: a relaxed investment policy tends to increase inventories and accounts receivable, extending the inventory and receivable conversion periods and ultimately the cash conversion cycle, whereas a restricted investment policy yields a shorter cash conversion cycle. Meanwhile, the working capital financing policy is measured by the short-term debt to assets ratio (STDAR), which reflects the company's use of external financing. This research analyzes whether companies rely more or less on external financing to fund working capital when associated with a certain cash conversion cycle length.
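The three proxies used in this study reduce to simple balance-sheet ratios, sketched below with toy figures (all in the same currency unit).

```python
def roa(net_income, total_assets):         # profitability proxy
    return net_income / total_assets

def stdar(short_term_debt, total_assets):  # financing-policy proxy
    return short_term_debt / total_assets

def cata(current_assets, total_assets):    # investment-policy proxy
    return current_assets / total_assets

print(roa(120, 1500), stdar(300, 1500), cata(600, 1500))  # 0.08 0.2 0.4
```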
Research Framework
Liquidity is the ability to cover short-term obligations by utilizing current assets. A large amount of current assets is no guarantee that a company can cover its current liabilities: even with large current assets, a company whose assets are difficult to convert into cash will still experience liquidity problems. The cash conversion cycle describes how liquid a company's current assets are; a shorter cash conversion cycle indicates more liquid current assets, because the company needs only a short time to convert them into cash. A shorter cash conversion cycle also provides a larger amount of cash inflows, and if a company cannot manage these well, it will experience excess working capital, or idle money. Instead of keeping large amounts of cash, the company should turn the cash inflows back into other favorable and profitable business activities in order to generate higher profits (Nwude, 2018).
A short CCC also affects sales: a shorter CCC provides larger cash inflows, and companies can turn the cash back into subsequent production activities, increasing sales. In addition to profitability and sales, a shorter CCC makes companies rely less on external financing, because the larger cash inflows lead them to prefer internal financing over debt. A shorter CCC also lowers the value of current asset investment, because the current assets are more liquid. Based on this research framework, the hypotheses of this research are formulated as follows:
H1: CCC has a negative and significant effect on profitability.
H2: CCC has a negative and significant effect on sales.
H3: CCC has a positive and significant effect on working capital financing policy.
H4: CCC has a positive and significant effect on working capital investing policy.
Research Design
The design of this research is causal research with a quantitative approach. Sekaran & Bougie (2017) describe causal research as a study that analyzes whether an independent variable influences a dependent variable, and a quantitative approach as one that explains the results of the research statistically. This research uses the variables of liquidity, profitability, sales, and working capital policy in manufacturing companies listed on the Indonesia Stock Exchange in the period 2014-2018.
Population and Sampling
The population is the group of objects that serve as the research source (Siregar, 2013). The population in this research is all manufacturing companies listed on the Indonesia Stock Exchange during 2014-2018, totaling 144 companies. Sample selection used the purposive sampling method, which Siregar (2013) defines as a method for retrieving data based on criteria that support the research purpose. Based on this method, 21 manufacturing companies were eligible as the sample over the 5-year observation period, yielding 105 sample data points.
Data Analysis Method
The data analysis in this research uses descriptive statistical analysis, simple linear regression analysis, the classical assumption tests (normality, heteroskedasticity, and autocorrelation tests), and hypothesis testing using the t-test to determine the partial effect of the independent variable on the dependent variables.
Result
This study uses SPSS version 25.0 as the data analysis tool. The analysis techniques used are descriptive analysis, the classical assumption tests, simple linear regression analysis, hypothesis testing with the t-test, and the coefficient of determination, described as follows:
Descriptive Statistic
The following are the results of the descriptive statistical calculation from SPSS version 25.0. The cash conversion cycle (CCC) of the manufacturing companies has a minimum value of -19.62 days and a maximum value of 408.98 days, while the mean value of CCC in the observation sample is 95.0590 days with a standard deviation of 72.18519. The minimum CCC is negative, at -19.62 days; a negative CCC indicates that cash inflows are collected faster than cash outflows are paid, that is, the time required to convert accounts receivable and inventories into cash is very short, showing that the company's assets are highly liquid. A small or even negative CCC reflects that the company manages its working capital effectively and efficiently and indicates a good liquidity condition.
On the other hand, the maximum CCC among the manufacturing companies is 408.98 days. A longer CCC indicates ineffectiveness and inefficiency in managing working capital and a poor liquidity condition. Nevertheless, a longer CCC can also be influenced by the type of business operation; for example, basic and chemical industries are likely to require a longer production process than the consumer goods industry. Every industry has its own process, so each company has a different CCC. The mean CCC of the manufacturing companies is 95.0590 days, and this value serves as the benchmark for manufacturing companies: companies whose CCC exceeds this benchmark can be suspected of having liquidity problems.
The Effect of Cash Conversion Cycle on Return on Assets
The results of the study indicate a negative and significant effect of the cash conversion cycle on return on assets. In the first hypothesis, the authors state that the cash conversion cycle has a negative and significant effect on return on assets; the data processed in SPSS version 25.0 show a negative and significant result, so the first hypothesis is accepted. The simple linear regression analysis between the cash conversion cycle and return on assets shows a negative effect, while statistical testing through the t-test shows that the t-count for the cash conversion cycle variable is 3.528, higher than the t-table value of 1.98326, with a significance value of 0.001, smaller than α = 0.05. It can therefore be concluded that the cash conversion cycle has a significant effect on return on assets.
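The regression design can be reproduced with statsmodels as sketched below; synthetic data stand in for the 105 firm-year observations, so the reported statistics will not match the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ccc = rng.uniform(-20, 409, size=105)                  # CCC in days
roa = 0.12 - 0.0003 * ccc + rng.normal(0, 0.02, 105)   # synthetic ROA

model = sm.OLS(roa, sm.add_constant(ccc)).fit()
print(model.tvalues[1], model.pvalues[1], model.rsquared)
```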
The R-square value of 0.636, or 63.6%, means that 63.6% of the variation in the dependent variable, the profitability of manufacturing companies listed on the Indonesia Stock Exchange in 2014-2018 proxied by return on assets, is explained by the independent variable, the cash conversion cycle. The remaining 36.4% is explained by other independent variables besides the cash conversion cycle, such as the current ratio, quick ratio, cash ratio, inventory turnover ratio, receivable turnover ratio, and working capital turnover ratio.
This result is consistent with the research of Harsh & Kumar (2017) on the relationship between working capital management and company profitability, which indicates a negative and significant effect of the cash conversion cycle on profitability. Research conducted by Nwude, Agbo & Lamberts (2018) on the effect of the cash conversion cycle on profitability in Nigerian insurance companies also shows a negative and significant effect.
According to Majanga (2015), a shorter cash conversion cycle shows the company's effectiveness in managing its liquidity. The negative effect of the cash conversion cycle on return on assets indicates that the shorter the cash conversion cycle, the higher the profitability the company gains. A short cash conversion cycle shows that a company collects cash inflows from its business activities faster than it pays operational cash outflows; the faster the company collects cash inflows, the sooner it can put the cash to work in subsequent business operations and thereby increase its profits. The length of the cash conversion cycle is determined by the management of accounts receivable, inventories, and accounts payable, so a shorter cash conversion cycle means the current assets are more liquid and more readily converted into cash.
The Effect of Cash Conversion Cycle on Sales
The results of the study indicate a negative and significant effect of the cash conversion cycle on sales. In the second hypothesis, the authors state that the cash conversion cycle has a negative and significant effect on sales; the data processed in SPSS version 25.0 show a negative and significant result, so the second hypothesis is accepted.
The simple linear regression analysis between the cash conversion cycle and sales shows a negative effect, while statistical testing through the t-test shows that the t-count for the cash conversion cycle variable is 20.441, higher than the t-table value of 1.98326, with a significance value of 0.000, less than α = 0.05. It can therefore be concluded that the cash conversion cycle has a significant effect on sales.
The R-square value of 0.867, or 86.7%, means that 86.7% of the variation in the dependent variable, the sales of manufacturing companies listed on the Indonesia Stock Exchange in 2014-2018, is explained by the independent variable, the cash conversion cycle. The remaining 13.3% is explained by other independent variables besides the cash conversion cycle, such as the current ratio, quick ratio, cash ratio, inventory turnover ratio, receivable turnover ratio, and working capital turnover ratio.
This finding agrees with the research of Wilujeng (2013), Uyar (2009), and Bhutto et al. (2011), which report a negative and significant effect of CCC on sales. This follows from the concept of the cash conversion cycle itself: the shorter the CCC, the better the company's liquidity.
One way a company can shorten the cash conversion cycle is by accelerating the collection of accounts receivable and the sale of inventory. Both components are important factors in company sales: a company can accelerate inventory sales by selling its products in cash or on credit, which in turn affects the amount of accounts receivable. Shortening the cash conversion cycle therefore means accelerating inventory sales, which increases the sales of the company's products.
The Effect of Cash Conversion Cycle on Short-term Debt to Asset
The results of the study indicate a positive and significant effect of the cash conversion cycle on short-term debt to assets. In the third hypothesis, the authors state that the cash conversion cycle has a positive and significant effect on short-term debt to assets; the data processed in SPSS version 25.0 show a positive and significant result, so the third hypothesis is accepted.
The simple linear regression analysis between the cash conversion cycle and short-term debt to assets shows a positive effect, while statistical testing through the t-test shows that the t-count for the cash conversion cycle variable is 5.016, higher than the t-table value of 1.98326, with a significance value of 0.000, less than α = 0.05. It can therefore be concluded that the cash conversion cycle has a significant effect on short-term debt to assets.
The R-square value of 0.590, or 59%, means that 59% of the variation in the dependent variable, short-term debt to assets in manufacturing companies listed on the Indonesia Stock Exchange in 2014-2018, is explained by the independent variable, the cash conversion cycle. The remaining 41% is explained by other independent variables besides the cash conversion cycle, such as the current ratio, quick ratio, cash ratio, inventory turnover ratio, receivable turnover ratio, and working capital turnover ratio.
These results are consistent with the research of Bhutto et al. (2011) and Wilujeng (2013), which state that the shorter the cash conversion cycle, the lower the ratio of short-term debt to assets. The positive effect of the cash conversion cycle on the short-term debt to assets ratio indicates that the working capital financing policy taken by the companies is conservative; as Bhutto et al. (2011) explain, a lower STDA ratio signals a more conservative working capital financing policy. This result is reasonable, considering that a shorter cash conversion cycle provides a large proportion of cash inflows, which allows companies to reduce external financing through debt and instead use the cash inflows as internal financing for operational activities. This matches the pecking order theory of the financing hierarchy, in which companies prefer internal over external financing because external financing is riskier: the use of current liabilities is external financing that exposes the company to higher risks such as liquidity risk and interest on debt.
The Effect of Cash Conversion Cycle on Current Asset to Total Assets
The results of the study indicate a positive but insignificant effect of the cash conversion cycle on current assets to total assets. In the fourth hypothesis, the authors state that the cash conversion cycle has a positive and significant effect on current assets to total assets; however, the data processed in SPSS version 25.0 show a positive but insignificant result, so the fourth hypothesis is rejected.
The simple linear regression analysis between the cash conversion cycle and current assets to total assets shows a positive effect, while statistical testing through the t-test shows that the t-count for the cash conversion cycle variable is 1.318, smaller than the t-table value of 1.98326, with a significance value of 0.192, higher than α = 0.05. It can therefore be concluded that the cash conversion cycle has an insignificant effect on current assets to total assets. This result contradicts the findings of Bhutto et al. (2011) and Wilujeng (2013), which report a positive and significant effect of the cash conversion cycle on current assets to total assets: a shorter cash conversion cycle does not always lead a company to reduce the proportion of its investment in current assets. This finding is supported by Khanqah et al. (2012), who also report an insignificant effect of the cash conversion cycle on CATA. A short cash conversion cycle indicates more liquid current assets, which by itself would lower the CATA ratio; however, a shorter cash conversion cycle also means faster cash turnover and higher cash inflows, so to avoid idle money the company can redeploy the cash into more productive business activities, such as increasing investment in other current assets or increasing production.
Therefore, a shorter cash conversion cycle does not necessarily cause a company to reduce the value of its current assets. On the contrary, with a faster cash conversion cycle, the company will use the cash for other productive and more effective business activities, which indirectly increases the value of current assets through higher short-term investment. This is described by agency theory, in which the manager, as the agent, turns the cash back to finance subsequent operating activities, fulfilling business interests and pursuing more favorable business activities through short-term investment rather than distributing the high cash inflows to stockholders.
Conclusions
Based on the results and discussion of this research on liquidity and its effects on profitability, sales, and working capital policy in manufacturing companies listed on the Indonesia Stock Exchange, the conclusions are as follows. Based on the t-test, the simple linear regression results partially show that the cash conversion cycle has a negative and significant effect on return on assets and on total sales; thus, the first and second hypotheses are accepted.
Partially, the cash conversion cycle has a positive and significant effect on the short-term debt to assets ratio (STDAR) of manufacturing companies listed on the Indonesia Stock Exchange; thus, the third hypothesis is accepted.
Partially, the cash conversion cycle has a positive but insignificant effect on current assets to total assets (CATA) of manufacturing companies listed on the Indonesia Stock Exchange; thus, the fourth hypothesis is rejected.
Suggestions
Based on the results of the data analysis, discussion, conclusions and research limitations, the suggestions are as follows. For manufacturing companies, the results of this study can inform decision making on the management of company liquidity with respect to the cash conversion cycle: companies should shorten the cash conversion cycle period, since a short cycle indicates that the company's current assets are more liquid. Companies can shorten the cash conversion cycle by accelerating the sale of inventories and the collection of accounts receivable, and by deferring payments of current liabilities, while considering the associated trade-offs.
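The cash conversion cycle itself can be made concrete with the standard textbook decomposition sketched below; the component formulas (DIO, DSO, DPO) and the figures are illustrative and are not taken from the paper.

```python
# Illustrative sketch of the cash conversion cycle, CCC = DIO + DSO - DPO.
def ccc(inventory, cogs, receivables, revenue, payables, days=365):
    dio = inventory / cogs * days        # days inventory outstanding
    dso = receivables / revenue * days   # days sales outstanding
    dpo = payables / cogs * days         # days payables outstanding
    return dio + dso - dpo

# Faster inventory sales and receivable collection lower DIO and DSO;
# deferring payments raises DPO -- all three shorten the cycle.
print(ccc(inventory=120, cogs=900, receivables=80, revenue=1200, payables=70))
```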
This study uses only the cash conversion cycle as a proxy for liquidity; future research could add other liquidity variables such as the current ratio, quick ratio and cash ratio. Because this study explains liquidity only through the cash conversion cycle, future work could also compare the effectiveness of static liquidity, measured through liquidity ratios, and dynamic liquidity, measured through the cash conversion cycle, in explaining company performance. Other research objects could be examined as well, for example other types of companies such as merchandising and service companies.
Lactate racemase is a nickel-dependent enzyme activated by a widespread maturation system
Racemases catalyze the inversion of stereochemistry in biological molecules, giving the organism the ability to use both isomers. Among them, lactate racemase remains unexplored due to its intrinsic instability and lack of molecular characterization. Here we determine the genetic basis of lactate racemization in Lactobacillus plantarum. We show that, unexpectedly, the racemase is a nickel-dependent enzyme with a novel α/β fold. In addition, we decipher the process leading to an active enzyme, which involves the activation of the apo-enzyme by a single nickel-containing maturation protein that requires preactivation by two other accessory proteins. Genomic investigations reveal the wide distribution of the lactate racemase system among prokaryotes, showing the high significance of both lactate enantiomers in carbon metabolism. The even broader distribution of the nickel-based maturation system suggests a function beyond activation of the lactate racemase and possibly linked with other undiscovered nickel-dependent enzymes.
Introduction
Lactic acid (L- and D-isomers) is an important and versatile compound produced by microbial fermentation. It is used in a range of applications in the agro-food, pharmaceutical and chemical sectors where optical purity is of tremendous importance 1 . Lactic acid is also common in numerous ecosystems and is involved in the energy metabolism of many prokaryotic species, as a product of sugar fermentation or as a carbon and electron source to sustain growth 2 . It can even be a component of the bacterial cell wall in order to confer resistance to the vancomycin antibiotic 3,4 . Micro-organisms have the remarkable ability to metabolize both lactic acid isomers via stereospecific lactate dehydrogenases. However, when only one stereospecific lactate dehydrogenase is present, production or utilization of the other isomer may proceed by lactate isomerization involving a specific lactate racemase (Lar) 5 . This activity was first reported in Clostridium beijerinckii (formerly C. butylicum) 6 and, since then, it has been identified in several species including lactobacilli [7][8][9][10] .
Besides lactate racemase, the only known α-hydroxyacid racemase is the mandelate racemase, which is a Mg-dependent enzyme of the enolase superfamily 11 . The majority of racemases are amino acid racemases, which are either pyridoxal 5′-phosphate (PLP) dependent or PLP-independent enzymes 12 . Both mandelate racemase and PLP-independent racemases rely on intramolecular stabilization of a deprotonated reaction intermediate for their catalysis 11,13 . Concerning the lactate racemase, another mechanism probably takes place due to the absence of an electron-withdrawing group on lactate. A few reports have addressed the Lar activity of Lactobacillus sakei, C. beijerinckii and C. acetobutylicum and suggested a hydride transfer mechanism [14][15][16][17] . However, the Lar enzymes were not identified in these species.
In Lactobacillus plantarum, we previously identified a gene cluster, named lar, which is positively regulated by L-lactate and is required for lactate racemization 4 . In this species, we have shown that D-lactate is an essential compound of the cell-wall peptidoglycan and have proposed that the lactate racemase acts as a rescue enzyme to ensure D-lactate production in physiological conditions where its production by the D-lactate dehydrogenase is not sufficient 4 . The identified lar locus responsible for lactate racemization is composed of five genes that are organized in an operon: larA, larB, larC, larD, and larE. Except for the larD gene, encoding a lactic acid transporter shown recently to increase the rate of lactate racemization in vivo 18 , the role of the other Lar proteins in the racemization of lactate could not be defined 4 . No similarity to any protein of known structure or function could be found for LarA and LarC. As for LarB and LarE, although similar proteins were found, the resulting predictions were not very informative: LarE was predicted as an ATP-utilizing enzyme of the PP-loop superfamily, and part of the C-terminal sequence of LarB was found to be similar to N 5 -carboxyaminoimidazole ribonucleotide mutase 4 .
In this study, we report the first molecular and structural characterization of a lactate racemase, a unique racemase that uses nickel as an essential cofactor. By a combination of in vivo and in vitro experiments, we demonstrate that the lactate racemase is activated by a novel maturation system composed of three accessory proteins that are essential for nickel delivery in the active site of the apo-enzyme. Using in silico analyses, we reveal the widespread distribution of this new nickel-based maturation system among prokaryotes suggesting an important role not only for lactate metabolism but potentially for the activation of other Ni-dependent enzymes.
Four proteins and Ni are required for in vivo Lar activity
In order to investigate whether the previously identified larA-E operon of Lactobacillus plantarum 4 is sufficient to confer Lar activity, it was cloned on a multicopy plasmid and expressed in the heterologous host Lactococcus lactis -a lactic acid bacterium with no Lar activity -under the control of a nisin-inducible promoter. Although Lar proteins could easily be detected from cell extracts of nisin-induced cultures after gel separation, no Lar activity could be measured. As the larA-E operon was not sufficient to confer Lar activity, we hypothesized that additional genes were required. To obtain an extended view of potential L. plantarum genes involved in lactate racemization, a transcriptomic approach was used based on the induction of Lar activity by L-lactate but not by DL-lactate 4 . Besides the previously reported larA-E operon 4 , we identified a second operon consisting of four genes (lp_0103 to lp_0100) as positively induced by L-lactate (Supplementary Table 1). This operon is located upstream of the larA-E operon in an opposite orientation (Fig. 1a). The first gene of the operon, named larR, codes for a transcriptional regulator of the Crp-Fnr family while the other three genes, lar(MN)QO, encode a three component ATP-binding cassette (ABC) transporter (Fig. 1a). Intriguingly, this ABC transporter is homologous to high-affinity Ni transporters, which are generally associated with Ni-dependent enzymes although no such enzyme is known or predicted in L. plantarum 19 . We investigated the contribution of this transporter to the Lar activity by marker-less gene inactivation. Deletion of larQO resulted in the same Lar-deficient phenotype as was observed in the larA-E deletion mutant strain (Fig. 1b). Complementation of this phenotype was attempted through supplementation of the culture medium with Co(II), Ni(II), and other divalent metals. Among all tested supplements, only Ni(II) was able to restore Lar activity in the ΔlarQO mutant. This Ni(II)-dependent recovery of Lar activity was dose-dependent and a full recovery of the wild-type activity was achieved with 1.5 mM Ni(II) (Fig. 1b). To validate the importance of nickel as an essential cofactor for Lar activity, the culture medium of the Lc. lactis strain overexpressing the larA-E operon was supplemented with Ni(II). Notably, the presence of nickel resulted in the activation of Lar activity to a level 3-to 4-fold higher than in L. plantarum (Fig. 1c). These results show that the transfer of the Lar activity of L. plantarum in this heterologous host only relies on the expression of the larA-E operon as long as Ni is supplemented to the growth medium.
We then addressed the question of the contribution of individual lar genes to Lar activity. For this, each gene of the larA-E operon was deleted from the overexpression plasmid and the Lar activity was assayed in Lc. lactis. Of the five genes of the operon, four were strictly required for Lar activity, while the aquaglyceroporin-encoding gene (larD) was dispensable (Fig. 1c). This in vivo investigation revealed that lactate racemization is surprisingly dependent on the presence of four proteins and requires the unusual nickel cofactor. This metal has been demonstrated to be an essential cofactor for only eight enzymes, all belonging to the family of Ni-dependent metalloenzymes 20 .
LarA is the Ni-dependent lactate racemase
To evaluate whether a specific Lar protein is catalytically active as lactate racemase, each Lar protein was purified in the presence of Ni in the culture medium. For purification needs, each one was individually fused to a StrepII-tag at either its N- or C-terminus and expressed in Lc. lactis using the larA-E expression vector. Compatibility of the inserted StrepII-tags with Lar activity was verified prior to purification (Fig. 2a). Purification of LarA and LarE could readily be achieved from the strain expressing the entire operon, whereas tagged LarB and LarC could only be purified when the corresponding genes were individually subcloned (LarA Lp , LarB, LarC, and LarE; Fig. 2b). For crystallographic needs, we also purified a LarA ortholog from a thermophilic bacterium, Thermoanaerobacterium thermosaccharolyticum, which shows 53% sequence identity with LarA Lp and also racemizes lactate when purified from cells expressing an artificial operon containing the larBCE genes from L. plantarum (LarA Tt ; Fig. 2a,b). The identity of each purified Lar protein was confirmed by MALDI-TOF (Matrix-Assisted Laser Desorption/Ionisation-time-of-flight mass spectrometry) analysis (Supplementary Table 2). Upon purification of StrepII-tagged LarC, two distinct proteins were observed with molecular masses around 36 and 55 kDa (Fig. 2b). The small protein corresponded to the protein encoded by the larC1 open reading frame (ORF) alone (LarC1). The protein showing a higher molecular mass was found to be a fusion between the proteins encoded by the larC1 and larC2 ORFs (LarC), most likely resulting from a programmed ribosomal frameshift 21 . In agreement, an artificial in-frame fusion between the larC1 and larC2 ORFs resulted in the expression of only the full-length LarC protein, which did not affect Lar activity (Fig. 2a,b). Conversely, expression of LarC1 alone resulted in a complete loss of Lar activity (Fig. 2a). Together, these results show that LarC is the functional form of the protein.
Of the four purified L. plantarum proteins, LarA was the only one capable of catalyzing lactate racemization (LarA Lp and LarA Tt ; Table 1 and Supplementary Fig. 1). The Ni content of both purified LarA homologs was assayed by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and visible spectroscopy using a chromogenic chelator (4-(2-pyridylazo)resorcinol (PAR)) 22 . Purified LarA homologs contained ∼0.1 to 0.2 mol Ni mol protein -1 (Table 1). Upon incubation at room temperature, free Ni became progressively available, which was correlated with a progressive loss of activity (Supplementary Fig. 2a), indicating that Ni was leaking out of LarA. This loss of activity could be delayed, but not prevented, by the addition of NiCl 2 or L-ascorbic acid in the buffer (Supplementary Fig. 2b). These observations confirm the previously reported protection of Lar activity by reducing agents (L-ascorbic acid, dithioerythritol, and β-mercaptoethanol) 7 and further highlight Ni leakage as an inactivation consequence. The kinetic parameters of both LarA homologs were also determined (Table 1 and Supplementary Fig. 1). Albeit in the millimolar range, the K m values determined for LarA Lp are still significantly lower than the cytoplasmic lactate concentration in L. plantarum (>200 mM) 23 , and in the range of previously determined values for the lactate racemase of L. sakei 14 . Given the partial Ni loading of purified LarA, its rate constants are most probably underestimated by at least 5-fold. Altogether, these results show that LarA is the Ni-dependent lactate racemase and suggest that the other Lar proteins act as accessory proteins for LarA activation. Maturation proteins are similarly reported for the activation of three other nickel-dependent enzymes 24 .
LarA shows a novel α/β fold
In order to identify the lactate racemase catalytic site, the crystal structure of LarA Tt was determined. LarA Tt crystals were obtained after 2 months of crystallization. The structure was solved by multi-wavelength anomalous dispersion and refined to 1.8 Å (Table 2). As expected, given the spontaneous Ni leakage from the enzyme, no Ni was present in the crystals. Crystals were soaked with NiCl 2 in the presence or absence of lactate but no Ni incorporation was observed. The enzyme crystallized as a dimer with both monomers showing nearly identical structures (Fig. 3a). LarA contains 18 β-strands and 16 α-helices arranged in a novel fold composed of two domains of similar size, connected by two hinges (Fig. 3b and Supplementary Fig. 3 for a stereo view). As the strand order 162345, observed in domain A, was not found in any fold of the SCOP database 25 , we hypothesize that LarA shows a new fold of the α/β class.
Comparison of 148 LarA homologues shows that 16 residues (3 His, 9 Gly, 2 Lys, 1 Asp, and 1 Arg) are fully conserved (Supplementary Fig. 4, and Supplementary Fig. 5 for a representative alignment of 10 LarA homologues from remote species). These residues are located at the interface between the two domains, most of them belonging to domain B (Fig. 4). Although the crystals were obtained in the absence of substrate (lactate) or inhibitor, sulfate and ethylene glycol were present during crystallization, and appear to form hydrogen bonds with conserved residues (His108, His200, Lys184, Lys298, and Arg75; Fig. 4a,b and Supplementary Fig. 6). The O-C-C-O atom connectivity of ethylene glycol is also found in lactate, suggesting that the molecule occupies the substrate binding site. These observations suggest that the conserved residues at the interface between the two domains constitute the catalytic site.
LarA Ni center is coordinated by His residues
In order to characterize the binding site of Ni in the LarA structure, XAS (X-ray absorption spectroscopy) experiments were conducted on frozen samples of purified LarA Lp and LarA Tt . The two LarA homologs show similar pre-edge XANES (X-Ray Absorption Near Edge Structure) features, which consist of 1s to 3d (8332.5 eV) and 1s to 4p (8337.2 eV) transitions (Fig. 5a). These two transitions are consistent with either a four-coordinate square planar nickel center or a five-coordinate square pyramidal site 26 , the latter being favored due to the well-defined shape of the 1s to 3d transition (Fig. 5a). LarA was also analyzed by EXAFS (Extended X-Ray Absorption Fine Structure). The Fourier-transformed spectrum (FT) of LarA Tt (Fig. 5b,c) is dominated by an intense feature at 1.7 Å with a smaller peak at 1.2 Å, suggesting the presence of a split first scattering shell. The goodness of fit (%R) from single scattering fits (Supplementary Table 3) improves significantly with splitting of the first scattering shell and addition of contributions from sulfur scattering atoms. These preliminary fits point to a five-coordinate nickel center dominated by N/O scatterers (N) and complemented by a single S/Cl scatterer (S). The Fourier-transformed spectra also show density between 2.5 Å and 4.0 Å of R-space, consistent with multiple scattering ligands, generally attributed to the presence of His residues in biological samples. Although the EXAFS data form a complicated fitting picture, fits consisting of multiple (2 to 3) His residues (H) are generally favored and needed to account for the density. Physically meaningful fits were only achieved when the histidine scattering shells were split. The fits converged on three models, two five-coordinate fits (N1H2H1S1 and N2H1H1S1) and a six-coordinate fit (N2H2H1S1), which were statistically similar (Supplementary Table 3), but the XANES analysis reported above supports a five-coordinate fit. The prediction can be further refined when glycerol (G) is taken into consideration; the latter is an additive in the buffer and also features the O-C-C-O atom connectivity found in lactate (Fig. 5c and Supplementary Table 3). These results indicate that the conserved His residues (H174, H200, or H108), which are located in the supposed catalytic site, and glycerol (or a similar molecule) may coordinate nickel in LarA.
LarC and LarE are Ni-containing proteins
The assembly of nickel metallocenters by maturases usually requires the presence of at least one nickel carrier (e.g. UreE in the urease system) which inserts Ni into the catalytic site 24 . Therefore, Ni was assayed in purified LarB, LarC, and LarE by ICP-AES and visible spectroscopy using PAR (Fig. 6a) 22 . Ni was detected in LarC and LarE, albeit at different levels. Purified LarC displayed the highest Ni content among all Lar proteins with 7 to 10 mol Ni mol protein -1 , considering that purified LarC is a mixture of LarC1 and LarC in a ratio of about 1.5:1 (Fig. 2b). This observation is consistent with the strong overrepresentation of His residues in LarC1 (8.0% vs. 1.7% in average L. plantarum proteins 27 ) and the presence of a His-rich region. As for LarE, 0.8 mol Ni mol protein -1 were found when the protein was expressed in the presence of Ni and LarBC. The amount of Ni decreased to 0.08 mol Ni mol protein -1 when no Ni(II) was present in the culture medium during LarE expression, and went further down to undetectable levels when Ni(II) was present but LarBC were not co-expressed (Fig. 6a). This shows the requirement of LarC and/or LarB for the Ni loading of LarE, the Ni probably being provided by LarC that could act as a Ni carrier/storage protein.
In vitro activation of LarA by a LarBC-activated LarE
To investigate LarA activation by the putative Lar accessory proteins, in vitro experiments were performed. For this purpose, an inactive version of LarA (apoprotein, apo-LarA) was purified from a strain lacking larBCE (A Lp NiΔBCE ; Fig. 6b). The ability of purified LarB, LarC, and LarE to activate apo-LarA was then evaluated in various combinations and in the presence of different cofactors known to activate other Ni-dependent enzymes (Supplementary Table 4). Notably, apo-LarA could be readily activated by adding purified LarE, but only if LarE was co-expressed with LarB and LarC and in the presence of Ni(II) (LarE NiBC , Fig. 6b). No additional cofactor was required for this activation or shown to enhance apo-LarA activation by LarE (Supplementary Table 4). These in vitro results demonstrate that Ni-loaded LarE acts as a maturation protein responsible for the activation of apo-LarA, and indicate that LarB and LarC are involved in the activation of LarE prior to apo-LarA activation.
LarA and its maturation system are widespread in prokaryotes
In order to get an overview of the role of Lar proteins in the microbial world, we analyzed the distribution of lar genes from the larA-E operon in prokaryotic genomes. The lar genes encoding putative nickel transport and regulation proteins (cluster larR(MN)QO) were not considered in this in silico analysis since nickel transport may be achieved by a wide variety of transporters 19 and transcriptional regulation of lactate racemization is not a prerequisite for this function. BlastP searches were performed against all complete prokaryotic genomes of the NCBI database (1,087 bacterial and archaeal genomes) using the different Lar proteins of L. plantarum WCFS1 as query sequences. This search revealed the presence of at least one homologue of larA, larB, larC, larD, and larE, in 111, 260, 263, 9, and 259 species, respectively (Fig. 7a). The larA gene appears to be present in most bacterial classes and in archaea. The largest numbers of larA homologues were found in clostridia (26 out of 78 species) and in δ-proteobacteria (19 out of 39 species), some species bearing up to 4 larA paralogues (Supplementary Tables 5, 6 and 7). This suggests that lactate racemization may not only be useful to lactic acid producers such as lactic acid bacteria, but also to a wide variety of species with different metabolisms, including acetogenic, sulfate-reducing, metal-reducing, fumarate-reducing and butyrate-producing bacteria (Supplementary Table 6). These bacterial taxa are indeed documented to utilize lactate as a carbon and/or electron source [28][29][30][31] .
Ninety-two percent of the genomes bearing a larA homologue (102 out of 111 genomes) also contained the genes for the Lar accessory proteins (larBCE), further reinforcing the necessity of these proteins for LarA activation (Fig. 7a). Strikingly, the accessory protein-encoding genes were also found in 153 species with no larA homologues (Fig. 7a). To get a better view of the relationship between these genes, their clustering within each genome was examined (Fig. 7b). The presence of a complete cluster including larABCDE seems to be restricted to only 6 species, all of them belonging to the lactobacillaceae family (Supplementary Table 6). As LarD was shown to be a lactic acid channel 18 , the expression of the whole cluster is expected to enhance lactic acid transport, besides the racemization of lactate. Twelve species harbor a larABCE cluster, but the most recurrent cluster only includes larBCE, which appears in 69 species, among which only 23 species also possess a larA homologue (Fig. 7b). Such putative operonic structures suggest that these genes likely participate in a common function which is not necessarily linked with lactate racemization.
Discussion
Nickel is an essential component of eight metalloenzymes involved in energy (e.g. hydrogenases) and nitrogen (urease) metabolism and is used by 80% of the archaea and 60% of the eubacteria 32 . As we showed that the lactate racemase is Ni-dependent, the number of Ni enzymes is now brought to nine 20 . In this study, we characterized the lactate racemase, LarA, and determined its 3D structure, which shows a novel multidomain fold of the α/β class. When comparing the LarA structure with all known folds using the VAST algorithm 33 , very few similarities with known structures could be identified (best score of 12.3 with an E-value of 1.45E-02; Supplementary Table 8). Nevertheless, domain A was found to be weakly similar to the small domain of trimethylamine dehydrogenase, whose function is unknown 34 , and domain B was found to share some similarities with S-adenosyl methionine (SAM)-dependent methyltransferases 35 , although the SAM binding site is conserved neither in the LarA structure nor in its primary sequence (Fig. 3a and Supplementary Fig. 5). The catalytic site was predicted to be composed of 7 conserved residues (3 His, 2 Lys, 1 Asp, and 1 Arg). XAS analyses suggest that at least two histidines are involved in Ni coordination. The conserved residues His108, His174 and His200 are good candidates for this function. A lactate molecule, showing the same O-C-C-O connectivity as glycerol and binding in a bidentate fashion, would also coordinate Ni. Finally, a yet undefined ligand would complete the coordination sphere, forming the predicted five-coordinate square pyramidal site.
Although nickel is absolutely required as a central component of the catalytic machinery of Ni-dependent enzymes, it can only be found in trace amounts in the environment. Therefore, sufficient nickel acquisition by these enzyme systems is a consequential process that can also be complicated by the expression of several nickel enzymes in the same organism 24 . Specific nickel-trafficking proteins are necessary to meet the distinct cellular demands for nickel. Those accessory proteins that are responsible for shuttling the nickel are thought to transfer nickel to the enzyme precursors through protein-protein interactions in a complex stepwise process 24 . However, Ni-binding accessory proteins were only identified in three of the eight Ni-dependent enzymes, i.e. urease, Ni-Fe hydrogenase, and carbon monoxide dehydrogenase 24 (illustrated for the urease at Fig. 8a). In this study, we identified three new accessory proteins, LarB, LarC and LarE, which are required for the lactate racemase activity. These accessory proteins participate in the incorporation of Ni in the lactate racemase apoprotein, as their absence leads to an inactive Ni-less enzyme (Fig.6). In addition, this mechanism is flexible and conserved among lactate racemases, as L. plantarum accessory proteins are able to activate the orthologous LarA Tt enzyme ( Table 1).
The incorporation of Ni in the apoprotein usually requires several accessory proteins and the hydrolysis of GTP (Fig. 8a). Yet, only one accessory protein, LarE, is required in the lactate racemase system, and ATP or GTP addition had no effect (Supplementary Table 4). As the LarE primary sequence shows similarities with ATP-utilizing enzymes of the PP-loop superfamily 4 , it is tempting to propose that hydrolysis of ATP to AMP takes place during the activation cascade, but more likely for the activation of LarE itself rather than for the activation of LarA. Furthermore, an excess of Ni can generally overcome the loss of one or several accessory proteins in other nickel-based systems 24 , whereas here Ni supplementation could neither complement the absence of any lactate racemase accessory protein in vivo nor activate the LarA apoprotein in vitro (Fig. 1c and Supplementary Table 4). This suggests that the metallocenter of the lactate racemase contains one or more ligand(s) in addition to Ni. In this case, LarE would serve as a scaffold protein for the synthesis of the Ni-containing metallocenter, which is then transferred into the catalytic site of LarA in one step. The synthesis of this metallocenter on LarE would require LarB and LarC. As LarC purified from Lc. lactis cells grown in the presence of Ni was shown to contain nickel independently of LarE and/or LarB, LarC is probably the Ni carrier of the Lar system (see Fig. 8b for a model). This activation mechanism is completely different from that of other maturase-activated Ni-dependent enzymes, where the assembly of the metallocenter takes place on the apoprotein and the Ni carrier transfers its Ni directly into the catalytic site (Fig. 8a). Yet, some similarities may be found with the activation mechanism described for [FeFe]-hydrogenases. This mechanism also involves one accessory protein (HydF), able to activate the apoprotein only when purified in the presence of two other accessory proteins (HydG and HydE) 36 (Fig. 8c), but these similarities refer only to the overall sequence of the activation cascade.
To conclude, this work reports the first molecular characterization of a lactate racemase, which is a novel maturase-activated Ni-dependent enzyme. The requirement for Ni is a novelty among racemases, but the absence of an electron-withdrawing group on lactate may explain its use for the catalysis of lactate racemization by a postulated hydride transfer mechanism 17 . This hypothesis is supported by the identification of a similar catalytic mechanism in [NiFe]-hydrogenases 37 . In addition, the proposed assembly of the metallocenter on a single pre-activated maturation protein is novel and has not been described so far for any other maturase-activated Ni-dependent enzyme. Finally, the occurrence of the genes encoding the Lar maturation machinery in many bacterial and archaeal genomes, whether or not they contain a lactate racemase-encoding gene, shows the broad importance of this novel Ni-based system and also suggests that this machinery might have been recruited for another function, probably linked with the activation of one or more Ni-dependent enzyme(s).
Biological material and growth conditions
Bacterial strains and plasmids used in the present study are listed in Supplementary Table 9.
All plasmid constructions were performed in Escherichia coli DH10B for pUC18Ery derivatives and in Lc. lactis NZ3900 for pNZ8048 derivatives. L. plantarum was grown in MRS (De Man-Rogosa-Sharpe) broth at 28°C without shaking. Lc. lactis was grown in M17 broth supplemented with 0.5% glucose at 28°C at 120 rpm. When appropriate, chloramphenicol and erythromycin were added to the media at 10 μg ml -1 and NiCl 2 at 1 mM concentration.
DNA techniques
General molecular biology techniques were performed according to standard protocols 38 .
Transformation of E. coli 39 , L. plantarum 40 and Lc. lactis 41 was performed by electrotransformation. PCR amplifications were performed with the Phusion high-fidelity DNA polymerase (Finnzymes, Espoo, Finland). The primers used in this study were purchased from Eurogentec (Seraing, Belgium) and are listed in Supplementary Table 9.
Construction of the ΔlarQO mutant
The larQO deletion vector pGIR001 was constructed in two steps. Initially, a 1.62-kb fragment located downstream of larO was amplified by PCR with primers LP096A1 and LP099B3, digested with XbaI and KpnI, and cloned into similarly digested pUC18Ery. Subsequently, a 1.22-kb fragment comprising a fragment of larQ and lar(MN) was amplified by PCR with primers LP0102A3 and LP0101B1, digested with SpeI and XbaI, and inserted in the XbaI site of the plasmid obtained in the first step. The correct orientation of the insert was assessed by PCR with the primers LP096A1 and LP0101B1. This plasmid, pGIR001, harbors an in-frame fusion between the 5′ end of larO and a middle fragment of larQ. Since cloning of a DNA fragment containing larR seemed to be toxic to E. coli, the complete deletion of the lar(MN)QO operon could not be achieved. The pGIR001 suicide vector was used to delete larQO through a two-step homologous recombination process 40 . Deletion was carried out in L. plantarum NCIMB8826 (wild type), generating strain LR0001. The ΔlarQO genotype was confirmed by PCR with primers LP096UP-3 and LP0105B1, located upstream and downstream of the recombination regions, respectively.
Construction of Lc. lactis expression plasmids
Plasmid pGIR100 was constructed by cloning of a DNA fragment comprising the whole larABCDE operon from L. plantarum NCIMB8826, which was amplified by PCR with primers StrepBZ_A2 and StrepB_B2, digested with PciI and SacI, and then ligated in the pNZ8048 plasmid digested with NcoI and SacI. The resulting plasmid was transformed in Lc. lactis.
Plasmids bearing deleted versions of the larA-E operon for expression in Lc. lactis were all derived from pGIR100: pGIR200 (ΔlarA), pGIR300 (ΔlarB), pGIR500 (ΔlarC), pGIR600 (ΔlarD) and pGIR700 (ΔlarE). For each construction, pGIR100 was first methylated with Dam methylase and S-adenosyl methionine (New England Biolabs). PCR amplification was performed in order to obtain a fragment comprising the whole pGIR100 plasmid, deleted of the gene of interest, using primers LarZ-X_A and LarZ-X_B (X stands for the gene to be deleted), digested with ClaI and self-ligated, generating an in-frame deletion of the selected gene. The ligation mixture was digested with DpnI before transformation in Lc. lactis in order to digest the original pGIR100 plasmid used as template. The plasmid sequences were confirmed by sequencing with primers UP_PNZ8048′ and 632SEQA4 to 632SEQA14.
Construction of plasmids for purification of Lar proteins
Plasmids for expression of selected StrepII-tagged Lar proteins together with expression of all other Lar proteins were derived from pGIR100 (containing the entire larA-E operon): pGIR112 (LarA-Strep-tag), pGIR122 (LarB-Strep-tag), pGIR131 (Strep-tag-LarC) and pGIR172 (LarE-Strep-tag). A fragment comprising the whole pGIR100 plasmid was amplified by PCR using primer pairs LarStrep_A/LarStrep_B for pGIR112, BT_A/BT_B for pGIR122, TC_A/TC_B for pGIR131, and LarE_TA/LarE_TB for pGIR172. In every case, one of the two primers contained the sequence encoding a StrepII-tag either in the upstream or the downstream primer (Supplementary Table 9). Amplified DNA fragments were digested with NheI and self-ligated, generating a 30-bp in-frame insertion of a fragment containing the StrepII-tag at the desired position (either 5′ or 3′ of the targeted gene). The ligation mixture was digested with DpnI before transformation in Lc. lactis in order to digest the original pGIR100 plasmid used as template. The sequence of the expression cassettes was verified by sequencing with primers UP_PNZ8048′ and 632SEQA4 to 632SEQA14.
Construction of plasmids for the modification of LarC
The intermediate plasmid pGEM_larABCDE was constructed by subcloning of a DNA fragment comprising the whole larABCDE operon from L. plantarum NCIMB8826, which was amplified by PCR with primers StrepBZ_A2 and StrepB_B2 and ligated into the pGEM®-T Easy (Promega) 42 .
The intermediate plasmids pGEM_larABC1ΔC2DE and pGEM_larABC-fusedDE were derived from the intermediate plasmid pGEM_larABCDE (containing the entire larA-E operon). PCR amplifications were performed in order to obtain a fragment comprising the whole pGEM_larABCDE plasmid, deleted of larC2, and a fragment comprising the whole pGEM_larABCDE plasmid, with a 1 bp insertion at the end of larC1, using primers LarZ-C2b_A and LarZ-C2b_B and primers pG_LarCC_A and LarCC_B, respectively. The PCR fragments were digested with BstBI and self-ligated. The ligation mixtures were digested with DpnI before transformation in E. coli DH10B in order to digest the original pGEM_larABCDE template plasmid, which had been purified from E. coli DH10B (Dam + ). Plasmids pGIR400 (ΔlarC2) and pGIR150 (LarC-fused) were constructed by cloning of a DNA fragment comprising the whole larABC1ΔC2DE operon or larABC-fusedDE operon, which was obtained by digestion of pGEM_larABC1ΔC2DE or pGEM_larABC-fusedDE with PciI and SacI, and ligated in the pNZ8048 plasmid digested with NcoI and SacI. The resulting plasmids were transformed in Lc. lactis.
Microarray experiments
A culture of L. plantarum TF101 (ΔldhL) 43 was grown to an OD 600 of 0.75 and then divided into three sub-cultures. Pure L-lactate (200 mM final concentration) was added to one of the sub-cultures. An equimolar mixture of D-and L-lactate (100 mM final concentration for each isomer) was added to a second sub-culture. The third subculture was not treated. The three sub-cultures were further incubated for 90 minutes, before harvesting by centrifugation (5,000 × g, 10 min). Cell pellets were stored at -20°C until RNA extraction. Cells were disrupted with four subsequent 40 sec treatments in a Fastprep cell disrupter, interspaced by 1 min on ice (Qbiogene Inc., Illkirch, France) 44 . After disruption, RNA was isolated with a High Pure RNA Isolation Kit, which included 1 h of treatment with DNase I (Roche Diagnostics, Mannheim, Germany) 44 . The RNA quality was assessed using the RNA 6000 Nano Assay in an Agilent 2100 Bioanalyzer (Agilent technologies, Palo Alto, Ca, USA) following the manufacturer's instructions. cDNA synthesis was carried out by the CyScribe Post-Labelling and Purification kit (Amersham Biosciences, Buckinghamshire, UK) following manufacturer's instructions. Hybridization was performed on custom designed L. plantarum WCFS1 11K Agilent oligo microarrays using the Agilent hybridization protocol (version 5.5). These microarrays contained an average of three probes per gene. The hybridization scheme contained the following cDNA comparisons: (a) untreated culture vs. L-lactate-treated culture and (b) untreated culture vs. DL-lactate-treated culture. Slides were scanned with an Agilent Scanner G2565AA and the intensity of the fluorescent images was quantified using Agilent Feature Extraction software (version A.7.5). Data were extracted, corrected for background, and normalized using the LOWESS algorithm in BASE 44 . The normalized transcriptome data have been deposited in the Gene Expression Omnibus (GEO) database under accession code GSE43518. First, significantly regulated probes were selected based on a fold change (Cy5/Cy3 intensities) higher than 4.0 or lower than 0.25. Genes for which more than 50% of the probes were not significantly regulated, were considered as not regulated. For the remaining genes, the fold change of gene expression was calculated as the average of the fold change between significantly regulated probes.
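The probe-level selection rule just described can be sketched as follows; the data-frame layout (columns gene and fold_change) is an assumption made for illustration, not the published analysis pipeline.

```python
# Hedged sketch of the probe selection rule: probes are significant when
# the Cy5/Cy3 fold change is > 4.0 or < 0.25; a gene counts as regulated
# only when at least half of its probes are significant, and its fold
# change is then the mean over the significant probes.
import pandas as pd

def regulated_genes(probes: pd.DataFrame, lo: float = 0.25, hi: float = 4.0):
    probes = probes.assign(sig=(probes.fold_change > hi) | (probes.fold_change < lo))
    result = {}
    for gene, grp in probes.groupby("gene"):
        if grp.sig.mean() >= 0.5:   # >50% non-significant -> not regulated
            result[gene] = grp.loc[grp.sig, "fold_change"].mean()
    return result
```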
Protein extraction and analysis
Cells from a 50 ml culture of Lc. lactis or L. plantarum were collected by centrifugation at 5,000 × g for 10 min and washed twice with 25 ml of 60 mM Tris-maleate buffer at pH 6.0 (TM buffer). Cells were resuspended in 0.5 ml TM buffer and transferred to a 2 ml microtube containing 0.5 ml of a suspension of 0.17-0.18 mm glass beads (Sartorius Mechatronics, Belgium) in TM buffer. Lysis was performed by running the microtubes 2 times 1 min at 6.5 m s -1 in a FastPrep-24 (MP, Belgium). Microtubes were cooled 5 min on ice between the runs. After lysis, the soluble fraction (referred to as the crude extract) was collected by centrifugation at 13,000 × g for 15 min (4°C). When larger volumes of culture were used, cells were resuspended and lysed in 50 ml Falcon tubes (BD, NJ, USA) using the same protocol.
Routine protein content was measured with the Bradford assay 45 . Since the Bradford assay is highly variable from one protein to another, the NanoOrange protein quantification kit (Invitrogen) was used for a lower protein-to-protein variability 46 . The conversion from g l -1 to mol l -1 was calculated with the theoretical molecular weight of the proteins, assuming they were 100% pure. The weight ratio of LarC1/LarC was estimated to be 1.
Protein purification
Affinity chromatography was performed with Gravity flow Strep-Tactin® Superflow® high capacity columns of 1 ml or 5 ml 49 , with the following adaptations. For 1 ml column purification, 1 l of Lc. Lactis culture was washed and cells were lysed as described above in buffer W (100 mM Tris-HCl 150 mM NaCl pH 7.5) instead of TM buffer. The column, equilibrated with 2 ml buffer W, was loaded with up to 10 ml crude extract, washed 6 times with 1 ml buffer W, and eluted 10 times with 0.5 ml buffer E (buffer W + 2.5 mM desthiobiotin). The following adaptations were made for the Lar proteins: LarA, 60 mM Tris-Maleate at pH 6 was used as lysis buffer and the crude extract was equilibrated at pH 7.5 (using 500 mM Tris at pH 10) before loading; LarB, Triton X-100 (0.1% v/v) was added to all buffers; LarE, 300 mM NaCl instead of 150 mM was used in all buffers. For purifications with 5 ml columns, all volumes were increased 5-fold. In order to purify the Lar proteins to homogeneity (for crystallization purposes only), a second step of purification was performed using size exclusion chromatography. A Hiload 26/60 superdex 200 prep grade resin (Amersham Pharmacia Biotech) was used with 50 mM MES at pH 6.0 and 150 mM NaCl as an elution buffer (300 mM NaCl for LarE). Before protein loading, the sample was concentrated using a Centricon Plus-70 centrifugal (30 kDa cut-off) filter unit (Merck Millipore, Germany). After collection of the fractions containing the target protein, as determined by absorbance at 280 nm, samples were concentrated again using an Amicon Ultra-4 (10 kDa cut-off) centrifugal filter (Merck Millipore, Germany). The purified Lar proteins were stored at -80°C in the elution buffer supplemented with glycerol to 20% of the final volume. The Lar activity of LarA was stable for several weeks in this condition.
Lactate racemase activity
The Lar activity was assayed by measurement of the D- to L-lactate or L- to D-lactate conversion. Cell extracts or purified proteins were incubated at the appropriate dilution with 20 mM D- or L-lactate in 60 mM MES buffer pH 6 at 35°C (LarA Lp ) or 50°C (LarA Tt ) during 10 min. A dilution factor of 10 and 50 was used for L. plantarum and Lc. lactis cell extracts, respectively. The reaction was stopped by incubating the reaction mixture for 10 min at 90°C. The lactate conversion was measured by enzymatic lactate oxidation into pyruvate using a D-lactic acid/L-lactic acid commercial test (R-Biopharm, Germany). The protocol was adapted to 100 μl reaction volumes in 96-well half-area microplates (Greiner, Alphen a/d Rijn, the Netherlands). The NADH absorbance was read at 340 nm with a Varioskan Flash (Thermo Scientific). One unit of lactate racemase (Lar) activity is defined as the amount of enzyme required to convert 1 μmol of lactate in 1 min.
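As a minimal arithmetic sketch of this unit definition (the function and the example numbers below are ours, not values from the paper):

```python
# One Lar unit converts 1 umol of lactate per minute; a measured conversion
# is scaled by the assay time and by the dilution factor of the extract.
def lar_units(lactate_converted_umol, time_min, dilution_factor=1.0):
    return lactate_converted_umol / time_min * dilution_factor

# e.g. 4 umol converted in a 10-min assay of a 50-fold diluted extract:
print(lar_units(4.0, 10.0, 50.0))  # -> 20.0 U in the undiluted extract
```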
For kinetics measurements, a substrate concentration ranging from 5 to 320 (or 400) mM was used. The K m and k cat were calculated by non-linear regression using the Michaelis-Menten equation. For time-dependent LarA inactivation assays, 25 pmol of purified LarA Lp was incubated at room temperature in a 100 μl solution containing 60 mM MES buffer pH 6 supplemented with NiCl 2 (10 mM) or L-ascorbic acid (10 mM). Samples (10 μl) were removed every 10 min during 80 min for measuring Lar activity with L-lactate as substrate.
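The non-linear regression step can be sketched with SciPy as follows; the velocity values are invented placeholders spanning the 5-320 (or 400) mM substrate range stated above.

```python
# Hedged sketch of the Michaelis-Menten fit, v = Vmax*[S]/(Km + [S]).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)  # mM lactate
v = np.array([0.8, 1.5, 2.6, 4.0, 5.4, 6.3, 6.9])         # illustrative rates
(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(7.0, 40.0))
# kcat = vmax / [enzyme]; with the partial Ni loading of purified LarA,
# the fitted vmax (and hence kcat) is a lower bound, as noted in the Results.
```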
Nickel assays
For Ni quantification by ICP-AES, the sample was first mineralized. A solution of 0.5 ml H 2 O 2 and 0.5 ml of HNO 3 (Merck Millipore, Germany) was added to 1 ml of protein sample and the mixture was then heated to dryness on a heating plate. The residues were solubilized with 0.5 ml of HNO 3 and diluted to 10 ml with H 2 O. The elements were measured by ICP-AES on an ICAP 6500 (Thermo Scientific).
For Ni quantification using PAR, the protein sample was denatured 10 min at 90°C prior to incubation at room temperature with 100 μM PAR in 100 mM Tris-HCl buffer at pH 7.5 for 2 min. The absorbance was read from 300 nm to 600 nm in steps of 2 nm to confirm that the visible spectrum corresponds to the expected PAR-Ni spectrum 22 . For Ni quantification, the absorbance was read at 496 nm. To quantify nickel leakage from LarA by the PAR assay, 1.5 nmol of LarA Lp was used and the absorbance (496 nm) was monitored every minute for 80 min.
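The absorbance-to-concentration step of this assay follows the Beer-Lambert law; in the sketch below the molar absorptivity of the Ni-PAR complex is an illustrative assumption and should in practice be replaced by a calibration against Ni(II) standards.

```python
# Beer-Lambert sketch for the PAR assay: A = epsilon * c * l.
EPSILON_496 = 3.8e4   # M^-1 cm^-1 for the Ni-PAR complex (assumed value)
PATH_CM = 1.0         # cuvette path length, cm

def ni_molar(a496, blank=0.0):
    return (a496 - blank) / (EPSILON_496 * PATH_CM)   # mol L^-1

def ni_per_protein(a496, protein_molar):
    return ni_molar(a496) / protein_molar             # mol Ni per mol protein
```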
In vitro LarA activation assays
The effect of accessory Lar proteins and cofactors on the in vitro activation of LarA NiΔBCE (apo-LarA) by LarE NiBC was assessed by incubating 2 pmol of LarA NiΔBCE at room temperature in a 50 μl solution of 60 mM MES buffer pH 6 supplemented with 20 pmol LarB, LarC or LarE and various cofactors (NiCl 2 , adenosine triphosphate, guanosine triphosphate, cysteine, S-adenosyl methionine, potassium hydrogen carbonate, coenzyme A, nicotinamide adenine dinucleotide, and thiamine diphosphate), when required. Lar activity was measured by sampling 10 μl of the solution at 0, 30, 60 and 120 min. For assessing the activation potential of the different forms of LarE (LarE NiBC , LarE BC or LarE NiΔBC ), 1.4 pmol of LarA NiΔBCE was mixed with 280 pmol of LarE.
Crystallization and structure determination of LarA
For initial screening experiments, a LarA Tt solution at a concentration of 16 mg ml -1 in 0.5 mM MES at pH 6.0 and 1.5 mM NaCl was submitted to the high-throughput crystallization facility at EMBL Hamburg 50 using the sitting drop vapour diffusion setup. After optimization of the best results, two different crystal forms were obtained in hanging drop experiments. The first form was obtained at 18°C from a reservoir containing 22% polyethylene glycol monomethyl ether 5000, 0.2 M ammonium sulfate, 0.1 M MES pH 6.5, 0.2 M sodium malonate pH 7.0 and 0.02% (w/v) sodium azide. The hanging drop was formed by mixing 2 μl of the protein solution with 2 μl of the reservoir solution. These crystals, which appeared after a few days, were trigonal, of the space group P3 1 21 or P3 2 21, with a = 227 Å and c = 48 Å, but they diffracted very poorly (about 4 Å resolution) with very diffuse spots and could not be used for crystal structure analysis. A second crystal form was obtained in very similar conditions: the only difference was the replacement of sodium malonate with 3% (v/v) ethylene glycol in the reservoir solution. These crystals appeared after about two months; they belong to the orthorhombic system, diffracted to much higher resolution and were used for crystal structure analysis.
All data were collected at ESRF on beamline BM30A. A native data set was collected after soaking a crystal for a few seconds in a solution similar to the mother liquor but containing 20% (v/v) ethylene glycol as a cryoprotectant and flash cooled at 100K. A mercury derivative was obtained by soaking a crystal for 48 h in the same solution containing also 1 mM HgCH 3 Cl and was used for MAD data collection at three wavelengths. This mercury derivative was not isomorphous with the native data. All the data sets were processed and scaled using the XDS program package 51 .
The structure was solved by the MAD method applied to the mercury derivative using the Auto-Rickshaw procedure 52 . A substructure containing 4 heavy atoms was successfully solved by SHELXD 53 . The initial phases and molecular model were obtained from SHELXE 54 and they were further improved in the Auto-Rickshaw pipeline using PHASER 55 , MLPHARE, PIRATE, REFMAC5 56 from the CCP4 suite 57 , RESOLVE 58 and ARP_wARP 59 . A total of 226 residues in three fragments were finally docked in the sequence. Surprisingly, all these residues belonged to the same protein chain. This model was used for molecular replacement in the native data set and PHASER 55 succeeded in locating two copies in the asymmetric unit. Many cycles of model building in ARP_wARP 59 allowed large parts of the two chains to be built. The model was manually completed using COOT 60 and refined with REFMAC5 56 . Electron density did not appear for residues 30-36 or for about ten residues in the C-terminal part of the two chains. The final model contains 588 water molecules, 5 sulfate ions, 1 ethylene glycol molecule and 1 supposed Mg 2+ ion located on the non-crystallographic two-fold axis linking the two protein chains. Atomic coordinates for the LarA of T. thermosaccharolyticum have been deposited in the RCSB Protein Data Bank (PDB) under accession code 2YJG.
EXAFS and XANES analyses
Protein samples from Lc. lactis, LarA Tt (4.5 mM, 0.1 mol Ni mol protein -1 ) and LarA Lp (1.2 mM, 0.17 mol Ni mol protein -1 ), were prepared in 10 mM Tris-HCl pH 7.5, 20% glycerol buffers for XAS. Samples were kept at -80°C and transported at liquid nitrogen temperatures until run. X-ray absorption data collection was carried out at SSRL (Stanford Synchrotron Radiation Lightsource, 3 GeV ring) beam line 7-3, equipped with a 13-element Ge detector array, a Si(220) φ = 0° double-crystal monochromator, and a liquid helium cryostat for the sample chamber. Soller slits were used to reduce scattering and a 3 μm Z-1 element filter was placed between the sample and the detector. Internal energy calibration was performed by collecting spectra simultaneously in transmission mode on a nickel metal foil.
Data averaging and energy calibration was performed using SixPack 61 . The first inflection points from the XANES spectral regions were set to 8331.6 eV for nickel foil. The AUTOBK algorithm available in the Athena software package was employed for data reduction and normalization 62 . A linear pre-edge function followed by a quadratic polynomial for the post-edge was used for background subtraction followed by normalization of the edge-jump to 1. EXAFS data was extracted using an R bkg of 1, and a spline from k = 1 to 14 Å -1 with no clamps. The k 3 -weighted data were fit in R-space over the k = 2 -12.5 Å -1 region with E 0 for nickel set to 8340 eV. All data sets were processed using a Kaiser-Bessel window with a dk = 2 (window sill). Artemis employing the FEFF6 and IFEFFIT algorithms was used to generate and fit scattering paths to data [62][63][64] . Single scatter and multiple scatter fits were performed as described below. Average values and bond lengths obtained from crystallographic data were used to construct initial fitting models for multiple scatter analysis 65 . The paths from a particular multiple scattering model were generally afforded two degrees of freedom and were fit in terms of the distance from the first ligand atom-metal bond and a ligand specific sigma square component of the Debye-Waller factor [66][67][68] . To assess the goodness of fit from different fitting models, the goodness of fit (%R), χ 2 , and reduced χ 2 (χ ν 2 ) were minimized. Increasing the number of adjustable parameters is generally expected to improve the %R; however χ ν 2 may go through a minimum then increase indicating the model is over-fitting the data.
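For reference, the fit-quality metrics named above are commonly defined as follows in the IFEFFIT framework; these are standard definitions quoted for clarity, not formulas reproduced from the paper.

```latex
% Common IFEFFIT-style definitions of the EXAFS fit metrics (for reference).
\[
\chi^{2} \;=\; \frac{N_{\mathrm{idp}}}{N_{\mathrm{pts}}}
\sum_{i=1}^{N_{\mathrm{pts}}}
\left[\frac{\chi_{\mathrm{data}}(k_i)-\chi_{\mathrm{fit}}(k_i)}{\varepsilon}\right]^{2},
\qquad
\chi^{2}_{\nu} \;=\; \frac{\chi^{2}}{N_{\mathrm{idp}}-N_{\mathrm{var}}},
\qquad
\%R \;=\; \frac{\sum_{i}\bigl[\chi_{\mathrm{data}}(k_i)-\chi_{\mathrm{fit}}(k_i)\bigr]^{2}}
               {\sum_{i}\chi_{\mathrm{data}}(k_i)^{2}}\times 100.
\]
```

Here N idp is the number of independent data points, N pts the number of points in the fit range, N var the number of fitted variables, and ε the estimated measurement uncertainty; χ ν 2 passing through a minimum and then increasing as parameters are added is the over-fitting signature mentioned above.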
Bioinformatic analyses of Lar proteins
To identify conserved residues in LarA proteins, 148 LarA homologues were aligned with clustalX2 69 , and a phylogenetic tree was constructed using the Neighbour-Joining method 70 . Out of these, 10 homologues selected to represent the diversity of LarA proteins were aligned with clustalX2 69 .
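The tree-building step just described can be reproduced in outline with Biopython; the alignment file name is a placeholder, and the identity-based distance model is our assumption, since the paper does not state which distance was used.

```python
# Hedged sketch of a Neighbour-Joining tree built from a ClustalX alignment.
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("larA_homologues.aln", "clustal")   # placeholder path
distances = DistanceCalculator("identity").get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(distances)            # Neighbour-Joining
```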
To study the distribution and clustering of Lar proteins among bacterial and archaeal genomes, BlastP searches were performed using the L. plantarum Lar protein sequences of strain WCFS1 (a single colony isolate of NCIMB8826) as queries against all complete prokaryotic genomes of the NCBI database (release 187). BlastP searches were performed using default parameters with a cut-off E-value of 10 -5 in order to only select proteins which show a high similarity with Lar proteins. For LarA homologues, proteins were excluded when their length was below 90% or above 130% of the length of LarA Lp . When several genomes of the same species were available, the genome containing the highest content in lar genes was retained. Gene clusters based on gene identification numbers were assessed by considering all lar genes as belonging to the same cluster when there was a maximum of one gene between them on the chromosome. Since over 500 homologues were found for LarD due to the high similarity level of members of the aquaglyceroporin family, only 9 homologs present in lar operons were retained as true LarD. Since some larC genes contain a frameshift, some larC homologs were misleadingly annotated as pseudogenes and were therefore overlooked by the BlastP program. Some of them were identified manually by looking at the flanking regions of lar genes, but others may still remain undetected.
[Displaced table footnote, most likely to Table 1: the « ± » represents the 95% confidence interval (normal distribution, n = 4); curves are shown in Supplementary Fig. 1.]
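The homologue-selection and clustering rules described in the preceding paragraph reduce to two small functions, sketched here; the thresholds are those stated above, while the function and field names are our own.

```python
# Hedged sketch of the LarA homologue filter (E-value <= 1e-5; hit length
# within 90-130% of that of LarA_Lp) and of the chromosome clustering rule
# (lar genes cluster together when at most one gene lies between them).
def keep_larA_hit(hit_len, query_len, evalue):
    return evalue <= 1e-5 and 0.9 * query_len <= hit_len <= 1.3 * query_len

def cluster_lar_genes(gene_indices):
    # gene_indices: sorted positional indices of lar genes on one chromosome
    clusters, current = [], [gene_indices[0]]
    for idx in gene_indices[1:]:
        if idx - current[-1] <= 2:   # gap of at most one intervening gene
            current.append(idx)
        else:
            clusters.append(current)
            current = [idx]
    clusters.append(current)
    return clusters
```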
Indigo Carmine Binding to Cu(II) in Aqueous Solution and Solid State: Full Structural Characterization Using NMR, FTIR and UV/Vis Spectroscopies and DFT Calculations
The food industry uses indigo carmine (IC) extensively as a blue colorant to make processed food for young children and the general population more attractive. Given that IC can act as a ligand, this raises concerns about its interactions with essential metal ions in the human body. In view of this interest, in the present investigation, the copper(II)/indigo carmine system was thoroughly investigated in aqueous solution and in the solid state, and the detailed structural characterization of the complexes formed between copper(II) and the ligand was performed using spectroscopic methods, complemented with DFT and TD-DFT calculations. NMR and UV/Vis absorption spectroscopy studies of the ligand in the presence of copper(II) show changes that clearly reveal strong complexation. The results point to the formation of complexes of 1:1, 1:2 and 2:1 Cu(II)/IC stoichiometry in aqueous solution, favored in the pH range 6–10 and stable over time. DFT calculations indicate that the coordination of the ligand to the metal occurs through the adjacent carbonyl and amine groups and that the 1:1 and the 2:1 complexes have distorted tetrahedral metal centers, while the 1:2 structure is five-coordinate with a square pyramidal geometry. FTIR results, together with EDS data and DFT calculations, established that the complex obtained in the solid state likely consists of a polymeric arrangement involving repetition of the 1:2 complex unit. These results are relevant in the context of the study of the toxicity of IC and provide crucial data for future studies of its physiological effects. Although the general population does not normally exceed the maximum recommended daily intake, young children are highly exposed to products containing IC and can easily exceed the recommended dose. It is, therefore, extremely important to understand the interactions between the dye and the various metal ions present in the human body, copper(II) being one of the most relevant due to its essential nature and, as shown in this article, the high stability of the complexes it forms with IC at physiological pH.
Introduction
Indigo carmine, the sodium salt of 5,5′-indigodisulfonic acid (Scheme 1, IC 2− ), is a water-soluble derivative of indigo obtained by introducing sulfonate groups into positions 5 and 5′ of the phenyl rings of indigo. Its structure preserves the indigo chromophore, responsible for the characteristic blue color of these two dyes. IC shows great chemical versatility, receiving practical uses in many different areas, such as in analytical chemistry as a redox indicator [1,2], in medicine as a diagnostic tool, in the food industry as a coloring additive (E132) [3], in the pharmaceutical industry as a colorant for drugs and food supplements and in the textile industry to color blue jeans and other blue denim [4].
Scheme 1. Dianionic indigo carmine (IC 2− ) and numbering scheme used in the discussion of results.
The IC dye has been considered a highly toxic indigoid [5]. It has been reported that contact with human skin can cause irritation and contact with eyes can cause permanent damage to the cornea and conjunctiva. In 1978, it was reported that ingestion of this synthetic dye could even be fatal, and it was indicated that it would lead to reproductive, neurological, developmental and acute toxicity [5]. When administered intravenously for medical procedures, such as in diagnostic methods and surgeries, it has been shown to be related to severe hypertension and cardiovascular and respiratory effects in patients. However, the exact causes of these side effects were not identified, and IC has been considered to be a useful and safe diagnostic tool [6,7]. The reported isolated cases of adverse effects to the use of IC were attributed to individual patient predispositions, rather than dye toxicity at the applied doses [8]. In 2023, the European Food Safety Authority (EFSA) issued an updated scientific opinion on the toxicity of IC [9]. The Panel on Food Additives and Nutrient Sources (ANS) confirmed the previously recommended acceptable daily intake of 5 mg/kg body weight per day for IC (E132) and noted that an intake below this level does not raise concerns regarding genotoxicity or subacute, chronic, reproductive or developmental toxicity. It was also noted that no population group exceeds the maximum recommended values of exposure. However, the EFSA panel also noted that infants, toddlers and children may be exposed to levels of IC above the acceptable daily intake if they consume more of the food products containing IC than the average population. This is in fact a real risk, since IC is frequently used to make food products for young children, such as chewing gum, cereal, frozen desserts and toppings, more attractive.
While food dyes can potentially interfere with many biological processes and metabolites [10][11][12][13], a major concern considering the consumption of food dyes is their interaction with metal ions. Although synthetic dyes are considered safe for food purposes, many of them have the structural characteristics necessary for complexation with heavy metals. Many of these elements, such as iron, zinc, copper, nickel, etc., play essential roles in several biological functions and metabolic routes and therefore must be kept in the human body within restricted limits. Interactions between food dyes and metal ions can alter their bioavailability or produce complexes or other species with high toxicity. Therefore, thoroughly studying the interaction of food dyes with transition metal ions is very important in the context of the evaluation of their toxicity.
In recent years, the complexation of IC with various metal ions, such as copper(II), zinc(II), nickel(II), cobalt(II) or iron(II), has been investigated. However, a comparison of the literature reveals that, for some of these metal ions, there is no consensus regarding the number or stoichiometry of the complexes formed. Additionally, these studies lack detailed information on the structures of the complexes, which is essential for a deep understanding of the mechanisms of toxicity of IC. A twenty-year-old study by Salas-Peregrin and coworkers explored the interaction between copper(II) nitrate and IC, by using infrared spectroscopy, elemental analysis, magnetic measurements and thermal techniques. A solid salt, characterized as a 1:1 (metal:ligand) complex, was isolated from aqueous solution, but no information was provided regarding the sites of interaction between the metal and the ligand [14]. In a more recent spectrophotometric study by Zanoni et al. [15], the complexation between IC and several metal ions in aqueous solution was investigated. The authors proposed the formation of stable complexes between the IC ligand and the Cu(II), Ni(II), Co(II) and Zn(II) ions at pH 10, suggesting a 2:1 (metal:ligand) stoichiometry for all the complexes. It was concluded that the complexation with Zn(II) was significantly weaker, while the complex with Cu(II) appeared to be the most stable [15]. The interaction of IC with the chloride salts of Fe(II), Ni(II) and Cu(II) in aqueous solution was also studied by Haleim and coworkers [16] using UV/Vis absorption spectroscopy. The authors suggested the formation of complexes with 1:2 (metal:ligand) stoichiometry with Fe(II) at pH 9.4, Ni(II) at pH 7.2 and Cu(II) at pH 5.2. An additional complex with copper was identified, with 1:1 (metal:ligand) stoichiometry, in the pH range 6.8-8.8. The different results obtained by Zanoni [15] and Haleim [16] regarding pH conditions, number of complexes and their stoichiometries for Cu(II) and Ni(II) may be due to differences in the reagents used. In the study by Haleim et al., the metals were used in the form of chloride salts, while the study by Zanoni et al. did not disclose what form the metals were in. In a more recent (2017) study [17], Tavallali and coworkers proposed that the complexation of IC with Cu(II) occurs in a 2:1 stoichiometry (metal:ligand), in the pH range 7.5-10. That study used UV/Vis absorption spectroscopy and presented conclusions about the structure of the complexes and coordination sites, suggesting that the coordination of the metal to the ligand occurs through the N-H and C=O groups [17]. However, contrary to the older study mentioned above, the study by Tavallali et al. did not use H 2 O as a solvent but rather a mixture of solvents, H 2 O/DMSO (4:1 v/v). The use of different solvents can promote the formation of different species.
The poor agreement between the different reports on IC complexation with metal ions in water presented hitherto and the lack of structural details make a more complete and systematic characterization of these systems necessary. The reported higher stability of the copper(II) complexes compared to those formed by IC with other metal ions under physiological pH conditions motivated our interest in investigating this system using a variety of techniques, in order to achieve a comprehensive characterization of the Cu(II)/IC system in aqueous solution and in the solid state. NMR, infrared and UV/Vis spectroscopic studies were carried out to determine the number, stoichiometries and coordination sites of the complexes in this system. These experimental studies were complemented with Density Functional Theory (DFT) investigations (including time-dependent DFT calculations) that provided additional structural details.
Structure and Energetics of the Indigo Carmine Molecule
The indigo carmine molecule is composed of two 5-sulfo-3-indolinone fragments bound through a C=C double bond between the carbon atoms at positions 2 and 2′. Taking into account that the sulfonate substituents introduced at positions 5 and 5′ of the benzene rings ionize at a very low pH and that the pK a of the amine groups is very high (pK a > 11) [18,19], we can conclude that the IC 2− anion (Scheme 1) is the dominant form of indigo carmine in the pH region considered in this work. Two isomers can be expected for the IC molecule: the trans and cis isomers, along with several tautomers resulting from the transfer of hydrogen atoms between the carbonyl and amine groups. In this study, to characterize the structure of the IC 2− ligand in aqueous solution, the structures of its possible isomers and tautomers were optimized using the DFT B3LYP/6-311++G(d,p) level of theory, taking into account the bulk solvent effects of water. Figure 1 and Table 1 present, respectively, the optimized structures of the most stable forms of IC 2− in water and their relative Gibbs energies, symmetries and equilibrium populations at 298.15 K.
The equilibrium populations were estimated from the calculated relative Gibbs energies using the Boltzmann equation. Figure S1 and Table S1 in the supplementary material provide data on the additional higher-energy tautomers.
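As an illustrative aside, the population estimate just described is simple to reproduce: the sketch below evaluates Boltzmann populations from relative Gibbs energies in Python. It is a minimal example under stated assumptions; the 37 kJ/mol value for the cis isomer is taken from the discussion that follows, and the remaining tautomers would enter the dictionary in the same way.

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15          # temperature, K

# Relative Gibbs energies (kJ/mol); trans is the reference and the cis
# value is the ~37 kJ/mol destabilization discussed in the text.
delta_G = {"trans": 0.0, "cis": 37.0}

# Boltzmann weights and normalized equilibrium populations (percent)
weights = {k: math.exp(-g / (R * T)) for k, g in delta_G.items()}
total = sum(weights.values())
populations = {k: 100.0 * w / total for k, w in weights.items()}

for form, p in populations.items():
    print(f"{form}: {p:.6f} %")
# trans evaluates to ~100 %, cis to ~3e-5 %, matching the ~100 %/~0 %
# populations reported in Table 1.
```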
The obtained results allow us to conclude that the strongly dominant form of IC 2− in solution, with a population of ~100% at 298.15 K, is the trans structure, with C i point group symmetry. In this isomer, the indigo chromophore has intramolecular hydrogen bonds between the amine and carbonyl groups, which stabilize this form relative to the cis form (in which these hydrogen bonds are absent) by 37 kJ/mol. According to the performed DFT calculations, the longest C-C bonds in the structure of the trans isomer are the C7-C8 and the C1-C7 bonds, with bond lengths of 1.494 and 1.464 Å, respectively. The remaining C-C bonds vary from 1.361 to 1.412 Å. This structural feature appears to be a consequence of the charge delocalization that occurs with the establishment of the intramolecular interactions between the amine and the neighboring carbonyl groups and the formation of the 6-atom pseudo-rings they originate. Trans indigo carmine presents a planar skeleton, while the optimized structure of the cis isomer has a substantial twist between the planes of the rings around the C=C central double bond (C7-C8=C19-C16 dihedral angle of 11.4°). The energy barrier for the trans → cis conformational isomerization was estimated as being 84 kJ/mol, according to the performed relaxed potential energy profile calculated at the DFT B3LYP/6-311++G(d,p) theoretical level (Figure S2). Therefore, this process is not expected to occur at room temperature without an external source of energy. The remaining tautomers of indigo carmine have relative energies considerably higher than that of the trans form, so they can be concluded to have negligible populations (~0%) in the relevant experimental conditions. The trans isomer will be, therefore, the only species considered in this study for the DFT calculation of the NMR, IR and UV/Vis spectra of the free ligand, which are discussed in the following sections and support the interpretation of the experimental results.
According to the performed literature search, no X-ray structure of indigo carmine has yet been published. Therefore, we present in Table 2 selected conformationally relevant structural parameters of the DFT optimized structure of the trans form.
Cu(II) Complexes of Indigo Carmine
In a preliminary step, before considering complexation with Cu 2+ ions, the infrared spectrum of the IC ligand in its commercial crystalline form was obtained and analyzed based on the DFT-calculated spectrum simulated for the IC 2− anion. Additionally, aqueous solutions of IC were prepared at different pH values and were left at room temperature until complete evaporation of the solvent. The blue powder obtained after evaporation of the solvent of each sample was then also analyzed by ATR-FTIR. From the analysis of these spectra, it was possible to conclude that in the pH region between 4 and 10, the ATR-FTIR spectrum of IC remains unaltered. However, the spectra of the prepared powder samples present differences relative to the spectrum of the crystalline indigo carmine obtained commercially. Figure 2 shows the ATR-FTIR spectra of the IC samples obtained from solutions at pH 4 and pH 10, along with the spectrum of the crystalline IC sample. These are compared with the IR spectrum calculated for the IC 2− anion at the B3LYP/6-311++G(d,p) level of theory, taking into account the bulk solvent effects of water. The vibrational frequencies of the theoretical spectrum were multiplied by an optimized scaling factor (0.984), which was determined by a linear fit between selected experimental vibrational frequencies of the crystalline IC and the corresponding theoretical frequencies (Figure S3).
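The scaling procedure mentioned above reduces to a one-parameter least-squares fit of experimental band positions against calculated ones, constrained through the origin. The sketch below shows this arithmetic; the frequency pairs are placeholders for illustration, since the actual band list behind Figure S3 is not reproduced in the text.

```python
import numpy as np

# Paired band positions in cm^-1 (calculated harmonic, experimental);
# these pairs are illustrative placeholders, not the data of Figure S3.
calc = np.array([1710.0, 1620.0, 1480.0, 1320.0, 1150.0])
expt = np.array([1659.0, 1598.0, 1460.0, 1301.0, 1133.0])

# Least-squares slope of expt = s * calc (line through the origin),
# which plays the role of the frequency scaling factor.
s = np.dot(calc, expt) / np.dot(calc, calc)
print(f"scaling factor: {s:.3f}")

# Scaled theoretical frequencies, ready for comparison with experiment
scaled = s * calc
```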
Comparing the spectra obtained for the powder and the crystalline IC samples, it is possible to observe the expected significant broadening of the bands in the spectra of the powder samples. Furthermore, some deviations in the frequencies of the bands are also observed; namely, the band due to the carbonyl stretching vibration, which appears at 1659 cm −1 in the crystalline IC spectrum, appears shifted to a lower frequency in the spectra of the powder samples (1635 cm −1 ). Differential Scanning Calorimetry (DSC) was used to confirm that there was no water present in the solid IC powder samples, and therefore, the observed changes were attributed to a loss of the crystalline arrangement of the IC molecules and formation of an amorphous phase. The solid samples of the Cu(II) complexes used in the ATR-FTIR studies were prepared following the same procedure presented for the ligand, and therefore, the experimental spectrum of the IC amorphous phase will be considered for comparison with the samples of complexes.
Published studies have reported complexation between IC and copper(II) nitrate [14,17] or copper(II) chloride [17]. It is possible that the use of different salts may be one of the reasons for the lack of agreement between the structures proposed for the complexes in those studies, since different copper salts can promote the formation of different complexes. To analyze this effect, we compared the complexes obtained with both salts in the solid state. Figure 3 shows the ATR-FTIR spectra of the solid samples obtained from aqueous solutions of Cu(NO 3 ) 2 •2.5H 2 O, CuCl 2 •2H 2 O, IC and solutions of the complex(es) formed between these salts and IC. It can be noted that the spectra of the metal:ligand samples suggest, for both salts, formation of the same type of complex(es), since the new bands detected in comparison to the ligand spectrum are the same in both spectra. It seems that, at least for the complex(es) formed in the solid state, both the nitrate and chloride metal salts originate the same type of complexes. The two spectra of the metal:ligand samples are very similar, except for the region between 1250 and 1500 cm −1 , in which the Cu(NO 3 ) 2 :IC spectrum presents a relatively wide band that impairs its analysis. This very broad band appears to have contributions of vibrations from the nitrate groups present in the sample (originating from the reactant Cu(NO 3 ) 2 •2.5H 2 O), since a very intense absorption band is observed at 1343 cm −1 in the spectrum of this salt. For this reason, the CuCl 2 •2H 2 O salt was chosen to carry out the subsequent experiments. It should be noted that at these high pH values (8-10), it is probable that the spectra of the salts show additional bands due to Cu(II) hydroxide.
The spectrum of the Cu(II):IC sample obtained from a 0.010:0.010 mol dm −3 solution at pH 9.95 (Figure 4, middle) presents significant differences compared to the spectra of the reagents, indicating that complex formation has occurred. This is revealed by the appearance of new bands at 1685 (band 1), 1522 (band 3) and 1288 cm −1 (band 5) and the increase in the intensity of the bands at 1572 (band 2) and 1321 cm −1 (band 4) relative to the spectra of the reagents. The region of the spectrum between 3700 and 2700 cm −1 is dominated by an intense band due to the symmetric and anti-symmetric stretching vibrations of the H 2 O molecules possibly coordinated in the complexes, together with N-H and C-H stretching vibrations (Figure S4).
In addition to the conditions already mentioned (Figures 3 and 4), in the study of the Cu(II):IC system by ATR-FTIR, the molar ratios 1:1, 1:2 and 2:1 (M:L) were also considered, with solutions at pH 4, 6, 7, 8 and 10. It was found that for all the conditions analyzed, the same complex was isolated in the solid state since, despite some slight differences in the relative intensities of the bands, all spectra present a similar profile, irrespective of the sample (Figure S5). For the various molar ratios, 1:1, 1:2 and 2:1, pH 8-10 appears to be the most favorable for complexation, since under these conditions, the intensity of the new absorption bands is maximum. Comparing the relative intensities of the bands in the spectra obtained for the samples with various molar ratios and the same pH (pH 8, Figure S5), the presence of a larger relative amount of the complex in the sample with a molar ratio of 1:2 (M:L), in which it is expected that a 1:2 stoichiometry complex is favored, can be concluded. This may suggest that this is the stoichiometry of the complex formed between Cu(II) and IC under the experimental conditions used in the present study.
A previously published [17] study on the complexation behavior of IC with Cu(II) reports FTIR data on this system. The IC+Cu(II) spectrum presented in that work is, however, very different from the ones we obtained for our samples; namely, the new bands 1 and 3, attributed in our work to the Cu(II):IC complex, are not present in the spectrum shown in that publication. However, in that previous study, a mixture of H 2 O:DMSO 4:1 was used as a solvent, so it is plausible to admit that the authors may have observed a Cu(II):IC complex different from the one we obtained in our work. In fact, in that publication, the complex found was characterized as a 2:1 (metal:ligand) species, contrary to our results which seem to point to a 1:2 complex. To obtain additional information on the stoichiometry and determine the structure of the Cu(II):IC complex found in the present work, we carried out additional experiments. Specifically, we studied the complexation between IC and Cu(II) in aqueous solution by UV/Vis absorption spectroscopy. This study was carried out at pH 6.2, since a preliminary study (Figure S6) showed that in solutions with a large excess of metal (10:1), there is formation of Cu(II) hydroxide above pH 7. UV/Vis absorption spectra were then obtained for an aqueous solution of IC with increasing concentrations of Cu(II) (metal:ligand molar ratio in the range 1:2 to 10:1) at pH 6.2 (Figure 5a).
Figure 5a,b clearly show that, as the concentration of Cu(II) increases, there is a gradual decrease in the intensity of the free IC absorption band (610 nm), with a simultaneous increase in the absorption intensity at 707 nm, which corresponds to the absorption maximum of the new species formed as IC is consumed. There is no well-defined isosbestic point, so it is possible that competition between several equilibria for the formation of different Cu(II)/IC complexes is occurring.
In an attempt to determine the stoichiometry of the complex(es) formed, Job's method was used. This method has, however, certain limitations that Job pointed out in his early work [20], such as that it would only be applicable when there is a single complex in solution. Later studies have shown that the concentration of the A a B b complex is still a maximum in the stoichiometric a/b ratio if the concentrations of the competing complexes are much lower for this ratio [21,22]. The Job's plot (Figure 6) shows two absorption maxima close to molar fractions 0.4 and 0.7 (the spectra were repeated multiple times with different solutions to rule out errors), suggesting that there will be more than one complex in solution, making it impossible to draw conclusions about their exact stoichiometries due to the method's limitations. Nonetheless, these maxima suggest that one of the two complexes will have a higher proportion of the ligand, close to 1:2 (maximum at 0.4), and the other complex will have a higher proportion of the metal, close to 2:1 (maximum at 0.7). This result is very relevant, since it reveals the formation of more than one complex in agreement with the previous finding of a poorly defined isosbestic point (Figure 5a).
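In Job's method, the total concentration [M] + [L] is held constant while the mole fraction of the metal is varied, and a maximum of the complex absorbance at metal mole fraction x implies a metal:ligand ratio of roughly x/(1 − x). The sketch below illustrates this reading of the plot with hypothetical absorbance values; it is not the data of Figure 6.

```python
import numpy as np

# Mole fraction of Cu(II) and baseline-corrected absorbance at the
# analysis wavelength; the absorbances are hypothetical, for illustration.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
A = np.array([0.10, 0.21, 0.30, 0.34, 0.28, 0.27, 0.31, 0.22, 0.11])

# Simple local-maximum search over the Job curve
maxima = [x[i] for i in range(1, len(x) - 1) if A[i - 1] < A[i] > A[i + 1]]

for xm in maxima:
    ratio = xm / (1.0 - xm)  # metal:ligand ratio implied by the maximum
    print(f"maximum at x(Cu) = {xm:.1f} -> M:L of about {ratio:.2f}:1")
# A maximum at x = 0.4 corresponds to M:L of ~0.67:1 (ligand-rich, near
# 1:2) and one at x = 0.7 to ~2.3:1 (metal-rich, near 2:1), as in the text.
```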
To further investigate the Cu(II):IC system in aqueous solution, 1 H NMR spectra were obtained for IC solutions in the absence and in the presence of the Cu 2+ metal ion (Figures 7 and S7), for different molar ratios and pH values and with two different copper(II) salts, Cu(NO 3 ) 2 and CuCl 2 (the full assignment of the ligand 13 C and 1 H spectra is also shown in Figures S8-S10). The spectra reveal almost the same complexation pattern with both salts for the same pH and molar ratio values (cf. Figure 7d,e). Despite the complexity of the system and the broad signals observed, by comparing the intensities of the signals in the spectra of the various solutions, it was possible to identify the major complexes, a, b and c, and establish that they are most probably species with a (metal:ligand) stoichiometry of 1:2, 2:1 and 1:1, respectively.

Due to the rapid nuclear relaxation induced by the paramagnetic properties of the Cu 2+ (3d 9 ) metal ion, the 1 H NMR signals of the complexed ligands show very broad signals and/or shifts to high frequencies when compared to the signals of the free ligand, making it difficult to unequivocally assign some of the signals. The most probable assignment of the 1 H NMR signals of the complexes is shown in Table 3.
The shifts resulting from the binding of the ligand to paramagnetic metals can result from the direct delocalization of the spin density (contact shift) or from the dipolar interactions in space (pseudo-contact shift) of the unpaired electrons of the metal. The Cu 2+ (3d 9 ) metal ion in the complexes has contributions that reflect both the contact shift and the pseudo-contact shift, resulting in the observation of very broad 1 H NMR signals, in particular for mononuclear complexes. These results align with the comparable effects observed in earlier studies on the paramagnetic 8-hydroxyquinoline complex of Cr(III) [23] and other mononuclear complexes of the Cu(II) metal ion [24,25]. On the other hand, binuclear copper(II) complexes exhibit narrower 1 H NMR signals, usually with signal widths two orders of magnitude smaller than mononuclear analogs. However, the broadening of the signals in mono- and binuclear complexes generally increases with an increase in the concentration of free metal in the solution, as shown in Figure S7. The broad signals observed for complexes a and c support the hypothesis that the complexes should be mononuclear species of 1:1 and 1:2 (metal:ligand) stoichiometries, CuIC and Cu(IC) 2 , respectively, while species b should correspond to a binuclear species of 2:1 (metal:ligand) stoichiometry, Cu 2 IC.
Using the structural information obtained from the FTIR, UV/Vis and NMR experiments as input information, DFT calculations were carried out with the aim of characterizing the structures of the complexes in greater detail. To determine their most probable, lowest-energy geometries, the various possible isomeric structures with 1:1, 1:2 and 2:1 Cu(II):IC stoichiometries were considered. It is expected that coordination of IC to the metal ion occurs through the carbonyl O and the deprotonated N atoms since, although the pK a value of IC is relatively high (pK a > 11) [18,19], deprotonation can occur at lower pH values in the presence of metals that can form stable complexes with IC (an assumption corroborated by the evidence of an increase in the extent of complexation with increasing pH). Taking into account that the usual coordination numbers for Cu(II) complexes with N- and O-donor ligands in aqueous solution are six, five and four (in fact, two of the relevant aqua species of Cu(II) are the [Cu(OH 2 ) 6 ] 2+ and [Cu(OH 2 ) 4 ] 2+ ions), six-coordinated metal centers were considered for all the input structures, by including coordinated H 2 O molecules in the remaining positions of the first coordination sphere of Cu(II). The optimization of the structures was carried out at the B3LYP/LanL2DZ/6-311++G(d,p) theoretical level, taking into account the bulk solvent effects of water. For all the structures, the optimization procedure converged to geometries in which one or two H 2 O molecules were expelled from the first coordination sphere and remained hydrogen bonded to different positions in the IC ligand. In a subsequent step, these H 2 O molecules were removed, and the structures were reoptimized. The optimized structures are shown in Figure 8 and reveal that Cu(II) in these complexes shows preference for metal centers with coordination numbers 4 and 5 (in the 2:1 singlet structure shown in Figure 8, one H-bonded water molecule was left in each center to allow comparison of energy with the triplet isomer).
The 1:1 complex has a metal center with coordination number 4 in a distorted tetrahedral arrangement, global charge −1 and C 1 point group symmetry. The 1:2 structure has global charge −4, C 1 point group symmetry and coordination number 5 in a metal center with a square pyramidal geometry. It is not unexpected to find different coordination numbers and geometries for Cu(II) complexes. Indeed, due to the non-spherical symmetry of the d 9 configuration and the Jahn-Teller effect on six-coordinate geometries, the coordination sphere of Cu(II) ions is characterized by its flexibility, with non-rigid geometries (fluxional behavior) that include regular octahedral, elongated tetragonal octahedral, square planar, tetrahedral, trigonal bipyramidal and square pyramidal metal centers [26][27][28].
The 2:1 stoichiometry structure has two Cu(II) ions, and therefore, two possibilities for the global spin of the complex were considered, namely the singlet state, assuming that pairing of the unpaired electrons of the two metal ions (of 3d 9 configuration) occurs, and the triplet state, assuming that in the structure, there are two unpaired electrons (that is, assuming ferromagnetic coupling between the metal ions). Both structures were optimized and converged to slightly different geometries. The 2:1 singlet structure converged to a geometry with coordination number 4, in which two water molecules were expelled from the first coordination sphere of each metal center, and has C i point group symmetry. The 2:1 triplet structure converged to a geometry with coordination number 5, square pyramidal metal centers and C i point group symmetry. The 2:1 singlet configuration was reoptimized using the five-coordinated triplet structure as a starting geometry, and again the fifth water molecule was expelled from each metal center. Comparison of the energies of the singlet and triplet isomeric 2:1 structures reveals that the singlet is more stable than the triplet by 61.8 kJ/mol, and therefore, these results suggest the singlet configuration for the 2:1 structure. This result is in complete agreement with the narrow NMR signals observed for complex b (Cu 2 IC), in contrast with the broader NMR signals observed for the mononuclear complexes a and c. These results also show that antiferromagnetic coupling occurs between the two metal centers of the 2:1 complex observed in aqueous solution. Through-ligand antiferromagnetic coupling has also been observed for other Cu(II) dinuclear complexes with long intermetallic distances (e.g., distances of 5.7 [29], 11.3 [30] and 12.3 [31] Å). According to our DFT calculations, the distance between the two metal ions in the 2:1 Cu(II)/IC complex is 6.24 Å, which is comparable to those examples in the literature. From a fundamental perspective, this is also an interesting system since it allowed us to investigate through-ligand exchange magnetic coupling by using NMR spectroscopy coupled with DFT calculations.
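The reported singlet-triplet gap can be recast as an exchange coupling constant, which is how such antiferromagnetic interactions are usually quoted. The conversion below assumes the Heisenberg convention H = -2J S1·S2 for two S = 1/2 centers (other sign conventions exist), and the magnitude obtained from a gap of this kind is typically an overestimate, with broken-symmetry treatments being the usual refinement; the arithmetic is shown only to make the sign argument explicit.

```python
# Exchange coupling J from the singlet-triplet gap, assuming the
# Heisenberg convention H = -2 J S1.S2, for which E(T) - E(S) = -2J
# when both centers are S = 1/2.
KJMOL_TO_CM1 = 83.593  # 1 kJ/mol expressed in cm^-1

gap_kjmol = 61.8            # E(triplet) - E(singlet), from the text
J_kjmol = -gap_kjmol / 2.0  # J < 0 signals antiferromagnetic coupling
J_cm1 = J_kjmol * KJMOL_TO_CM1

print(f"J = {J_kjmol:.1f} kJ/mol = {J_cm1:.0f} cm^-1 (antiferromagnetic)")
```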
Once plausible structures for the complexes had been determined, we were ready to analyze the obtained UV/Vis absorption and FTIR results in more detail. The UV/Vis absorption spectra were simulated by TD-DFT for the 1:1, 1:2 and 2:1 singlet structures presented in Figure 8 and for the free ligand (Figure 9b). TD-DFT calculations on molecules that have an open-shell ground state can generate excited states with unphysically large amounts of spin contamination. This difficulty can be partially overcome by only considering excited states that preserve <S 2 > within appropriate limits, neglecting states for which the difference between <S 2 > and its exact value goes beyond the acceptable error [32]. The spectrum simulated for the 1:1 structure will not be analyzed since, for this structure, almost all the calculated excited states showed a high degree of spin contamination under the calculation conditions used.
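The screening criterion just described is a small post-processing step on the TD-DFT output: each excited state carries an <S 2 > expectation value that can be compared against the exact S(S + 1). The sketch below illustrates the bookkeeping with hypothetical state data; the tolerance is an assumed value chosen only for the example.

```python
# Hypothetical TD-DFT excited states of a doublet (S = 1/2) complex:
# (excitation energy in eV, oscillator strength, <S^2> of the state).
states = [
    (1.75, 0.021, 0.79),
    (2.02, 0.140, 0.76),
    (2.73, 0.060, 1.43),  # heavily spin-contaminated state
    (3.10, 0.350, 0.82),
]

S = 0.5
s2_exact = S * (S + 1.0)   # 0.75 for a doublet
tol = 0.2                  # assumed acceptable deviation, for illustration

kept = [st for st in states if abs(st[2] - s2_exact) <= tol]
dropped = [st for st in states if abs(st[2] - s2_exact) > tol]

print("kept states:", kept)
print("dropped (spin-contaminated):", dropped)
```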
One can observe that the spectrum calculated for the 1:2 structure correctly reproduces the shift observed in the experimental spectrum of the Cu(II)/IC system at pH 6.2 in the region between 350 and 900 nm, in agreement with the NMR experimental results at pH 6, which indicate an equilibrium dominated by complex a (1:2 Cu:IC), with a significant presence of complex c (1:1). We should note that the relative intensities of the theoretical bands do not necessarily correlate with the experimental spectra, as these intensities depend on the experimental relative concentrations of IC and the complex. The spectrum calculated for the 2:1 structure presents an additional absorption band, at shorter wavelengths (ca. 454 nm), which is not observed experimentally, and therefore, this complex should not be present in significant amounts at pH 6, also in accordance with the NMR results. The band observed at 710 nm for the 1:2 complex can be assigned, based on the TD-DFT calculations, to the contributions of two bands predicted at 592 and 615 nm, which involve transitions with π-π* character. The calculated vertical excitation energies, oscillator strengths, wavelengths and major contributions to the excited states of indigo carmine and its 1:2 and 2:1 complexes with Cu(II) are summarized in Table S2 and Figures S11 and S12.
EPR spectra were obtained for three different Cu(II):IC solutions, with varying pH and metal:ligand molar ratios. The spectra for solutions at pH 8 and molar ratios of 1:2 (2.5:5 mmol dm −3 ) and 1:1 (10:10 mmol dm −3 ) provided interesting results, suggesting the presence of two paramagnetic species (Figure S13). Considering that the NMR data obtained for similar pH and molar ratios (Figure 7c-e) indicate the coexistence of three Cu(II):IC complexes under these conditions, these results are in complete agreement with our proposal that two of these complexes are paramagnetic while one is diamagnetic. EPR spectra were also obtained for a 2.5:5 mmol dm −3 solution at pH 6; however, in these conditions, the spectrum does not show a reasonable signal/noise ratio. The distribution of copper into multiple complexed species, along with the presence of free metal, probably contributed to the difficulty in obtaining a good spectrum.
The theoretical infrared spectra were also calculated for the same structures in order to structurally characterize the species observed in the solid state (Figure S14). Comparing the theoretical spectra with the experimental one, it can be seen that the spectra of the 2:1 (both singlet and triplet) structures fail to predict the new bands attributed to the complex, so these two structures do not seem to correspond to the observed complex. On the other hand, the spectra calculated for the 1:1 (complex c) and 1:2 (complex a) structures reproduce the experimental spectrum much better, and the agreement is almost perfect for the spectrum calculated for the 1:2 structure, which correctly predicts not only the vibrational frequencies of the new bands of the complex but also their relative intensities (Figure 10).
As indicated previously, the most significant changes observed in the experimental spectrum of the solid Cu(II):IC sample relative to the experimental IC spectrum are the presence of five new bands, at 1685, 1572, 1522, 1321 and 1288 cm −1 . In perfect agreement with this, these bands are absent in the theoretical IC spectrum and are predicted for the 1:2 complex at 1686, 1572, 1516/1515, 1318 and 1285 cm −1 , respectively. The good agreement between the calculated spectrum for the 1:2 structure and the experimental spectrum obtained for the solid sample of Cu(II):IC allows us to propose the 1:2 structure in Figure 8 as having the essential structural features of the complex formed in the solid state. Based on this good agreement, we can assign the bands observed for the complex with a high degree of confidence. The new bands 1 and 3 are attributed to the stretching vibrational mode of the carbonyl groups. Band 1, predicted at 1686 cm −1 , corresponds to the stretching vibration of the C=O bond of the free carbonyl groups, coupled with the C=C stretching of the double bond at position 2, and band 3, predicted at 1516/1515 cm −1 , corresponds to the C=O stretching vibration of the coordinated carbonyl groups, conjugated with C=C stretching and N-H in-plane angular deformations. These results are particularly relevant as they confirm the coordination of only one carbonyl group of each ligand molecule to the metal. The coordination of the carbonyl O atom involves donation of electrons from the ligand to the metal, making the C=O bond weaker and therefore changing the stretching vibrational frequency of the coordinated C=O groups to a lower frequency relative to the uncoordinated C=O group. Band 2, predicted at 1572 cm −1 , is due to C-C stretching vibrations in the phenyl rings. Band 4, predicted at 1318 cm −1 , involves mainly C-C stretching vibrations of the phenyl rings, but it also has some contribution of C=O stretching of the coordinated carbonyl groups. Finally, band 5, predicted at 1285 cm −1 , is due to N-C stretching coupled with in-plane C-H angular deformation. The EDS results (Table 4) indicate, however, that the actual structure of the complex formed in the solid state has a higher percentage of copper than the one predicted for the 1:2 structure. Bearing in mind that in the solid state, a rearrangement of the 1:2 structure may occur, such that polymeric structures based on the 1:2 repeating unit may form, we carried out additional DFT calculations for the smallest oligomer of this type, a structure with 2:3 Cu(II):IC stoichiometry, to predict its stability and vibrational spectrum. Such a structure was found to be indeed stable, and its vibrational spectrum was also found to correctly reproduce the experimental infrared spectrum of the solid sample (Figures S16 and S17). For this structure, the agreement between the experimental and the expected copper percentage is better and will improve for longer oligomers. Therefore, these results suggest that the solid-state structure of the complex will likely consist of a polymeric arrangement involving repetition of the 1:2 complex unit.
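The trend invoked in this last step can be checked by simple counting. A chain built by repeating the 1:2 unit with shared ligands has composition Cu n (IC) n+1 (n = 1 being the discrete 1:2 complex and n = 2 the 2:3 oligomer treated above), so the Cu:IC ratio rises from 1:2 toward 1:1 with chain length; the sketch below tabulates this stoichiometric bookkeeping only, with no fit to the EDS data.

```python
# Cu:IC molar ratio of oligomers Cu_n(IC)_(n+1) obtained by repeating
# the 1:2 complex unit with shared ligands; n = 1 is the discrete 1:2
# complex and n = 2 the 2:3 species considered in the DFT calculations.
for n in (1, 2, 3, 5, 10, 100):
    cu, ic = n, n + 1
    print(f"n = {n:3d}: Cu:IC = {cu}:{ic} = {cu / ic:.3f}")
# The ratio climbs from 0.500 for the isolated 1:2 complex toward 1.0
# for long chains, consistent with the higher copper content seen by EDS.
```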
Materials and Preparation of Samples
Commercially available disodium 2-(3-hydroxy-5-sulfonato-1H-indol-2-yl)-3-oxoindole-5-sulfonate (indigo carmine, IC, Merck, Darmstadt, Germany), copper(II) nitrate hemi(pentahydrate) (Sigma-Aldrich, St. Louis, MO, USA) and copper(II) chloride dihydrate (Merck) were used as received. For the UV/Vis, EPR and ATR-FTIR experiments, solutions were prepared in Milli-Q H 2 O, and the pH was adjusted by the addition of HClO 4 and NaOH solutions. The samples were kept in the dark until used. Powder samples for the ATR-FTIR experiments were prepared by evaporating the solvent at room temperature from the solution samples. The sample for the EDS analysis was obtained by filtration of the precipitate formed in a 10:10 mmol dm −3 Cu(II):IC solution after being kept at 5 °C for a week. After filtration, the precipitate was dried in a desiccator. The solutions for the NMR experiments were prepared in a 70%:30% mixture of H 2 O/D 2 O, and the pH was adjusted by adding DCl and NaOD; the reported pH* values are the direct readings from the pH-meter at room temperature, after standardization with aqueous (H 2 O) buffers.
Instrumentation
The ATR-FTIR spectra were acquired in a Thermo Scientific Fourier Transform Infrared Spectrometer-Nicolet iS5 iD7 ATR (Thermo Fisher Scientific, Waltham, MA, USA) (resolution 1 cm −1 ), using the OMNIC 8 program for spectra collection and analysis. The animation visualization module of the GaussView 6.0 program was used to facilitate the assignment of the vibrational modes. A Shimadzu UV-2100 spectrometer (Shimadzu, Tokyo, Japan) was used to obtain the UV/Vis absorption spectra, and the 1 H and 13 C NMR spectra were acquired in a Bruker Avance 500 NMR spectrometer. The HSQC and HMBC (2D NMR) spectra were recorded on the same spectrometer. In the NMR spectra, the methyl signal of tert-butyl alcohol was used as an internal reference (δ 1.2 for 1 H and δ 31.2 for 13 C) relative to TMS. The 13 C spectra were obtained using Waltz-16 proton decoupling and taking advantage of the nuclear Overhauser effect (as a result, signal intensities for 13 C spectra are not quantitative). The X-ray microanalysis (EDS) was conducted with the Bruker QUANTAX system, which includes the Bruker Nano XFlash ® detector (Bruker, Billerica, MA, USA). The Bruker Nano XFlash ® detector is an energy-dispersive X-ray detector that works according to the principle of the silicon drift detector, with a 133 eV energy resolution (Mn Ka) @ 100 kcps. The detector has an effective area of 10 mm 2 and is cooled by a Peltier element. The elements in the range B (Z = 5) to Am (Z = 95) can be identified and quantified. The software module uses a standardless PB-ZAF method for quantification. This system is installed in the TESCAN VEGA 3 SBH-Easy Probe SEM (TESCAN, Brno, Czechia). For the data analysis, the ESPRIT 1.9 software was used. The EPR spectra (X-band, 0.34 T, 9.5 GHz) were recorded on a continuous-wave Bruker EMX spectrometer. Typical instrument settings were microwave power 0.6 mW, modulation amplitude 16.4 G, modulation frequency 100 kHz, sweep width 600.0 G and 256 scans for each spectrum. The solution samples were placed in a 3 mm EPR tube for the measurements.
Computational Details
The geometries of the conformers and tautomers of IC and its Cu(II) complexes were optimized by DFT, using the B3LYP [33] functional, the LanL2DZ [34,35] pseudopotential and the associated valence basis sets for Cu and the 6-311++G(d,p) [36] basis set for the remaining atoms. Vibrational frequency analysis was carried out to assess the nature of the stationary points; that is, the absence of imaginary frequencies allowed confirmation that these are minima on the potential energy surface. The potential energy profile for the cis-trans conformational interconversion in the most stable tautomer of indigo carmine was obtained by scanning the conformationally relevant dihedral angle (C7-C8=C19-C16) with optimization of the remaining structural parameters.
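The relaxed scan amounts to stepping the chosen dihedral over a grid, reoptimizing all the remaining coordinates at each step and reading the barrier off the resulting profile. The sketch below shows only that bookkeeping; profile_energy is a hypothetical stand-in (a cosine model shaped to the 37 and 84 kJ/mol values quoted earlier), not the actual DFT surface, whose points would come from the constrained optimizations.

```python
import math

def profile_energy(dihedral_deg: float) -> float:
    """Hypothetical stand-in torsional potential (kJ/mol): minima near
    180 deg (trans, 0) and 0 deg (cis, ~37), barrier ~84 near 90 deg.
    In practice each point is a constrained DFT optimization."""
    t = math.radians(dihedral_deg)
    return 18.5 * (1.0 + math.cos(t)) + 65.5 * math.sin(t) ** 2

# Relaxed-scan driver: step the dihedral, record the profile, get the barrier.
grid = range(0, 181, 10)
profile = [(d, profile_energy(float(d))) for d in grid]

e_min = min(e for _, e in profile)
barrier = max(e for _, e in profile) - e_min
print(f"estimated trans -> cis barrier: {barrier:.1f} kJ/mol")
```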
The calculated harmonic vibrational frequencies were scaled with scaling factors designed to correct for limitations introduced by the incomplete basis set, the partial treatment of electronic correlation and vibrational anharmonicity [37,38]. These scaling factors were determined by linear fits between selected experimental vibrational frequencies of the studied samples and the corresponding theoretical frequencies. The theoretical infrared spectra depicted in the figures were simulated by employing Lorentzian functions with a full width at half maximum (FWHM) of either 6 or 4 cm −1 , centered at the scaled calculated vibrational frequencies.
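Simulating a spectrum from the scaled stick frequencies, as just described, is a sum of Lorentzian line shapes centered at the calculated bands. A minimal sketch follows; the two-band stick list is a placeholder and the grid limits are assumptions.

```python
import numpy as np

def lorentzian_spectrum(freqs, intens, fwhm=6.0):
    """Broaden a stick spectrum (positions in cm^-1, arbitrary
    intensities) with Lorentzian profiles of the given FWHM."""
    grid = np.linspace(400.0, 1800.0, 2801)
    hwhm = fwhm / 2.0
    spec = np.zeros_like(grid)
    for f0, s in zip(freqs, intens):
        spec += s * hwhm**2 / ((grid - f0) ** 2 + hwhm**2)
    return grid, spec

# Placeholder sticks: two scaled frequencies with relative intensities
grid, spec = lorentzian_spectrum([1686.0, 1516.0], [1.0, 0.7], fwhm=6.0)
print(f"strongest simulated band near {grid[np.argmax(spec)]:.0f} cm^-1")
```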
The optimization of the structures was carried out accounting for the bulk solvent effects of water through the application of the IEFPCM (integral equation formalism variant of the polarizable continuum model) [39,40].
The UV/Vis absorption spectra of both the ligand and complexes were computed using the TD-DFT (Time-Dependent Density Functional Theory) method, using the long-range corrected hybrid CAM-B3LYP [41] functional and the basis sets indicated above. This functional, integrating Hartree-Fock exchange interaction in varying degrees depending on interelectronic distance, has demonstrated its ability to yield vertical excitation energies close to experimental values across a diverse range of molecules [42,43]. The DFT and TD-DFT calculations were conducted with the Gaussian 16 [44] code, while visualization of structures and molecular orbitals was enabled by the GaussView 6.0 software.
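To turn the TD-DFT output (transition wavelengths and oscillator strengths) into a UV/Vis envelope for figures such as Figure 9b, a Gaussian band is commonly placed at each transition. The sketch below shows one such recipe; the excitation data are placeholders (the two π-π* transitions discussed for the 1:2 complex, with assumed oscillator strengths) and the bandwidth is an assumption.

```python
import numpy as np

def uv_vis_envelope(wavelengths_nm, osc_strengths, sigma_nm=30.0):
    """Sum Gaussian bands, with amplitude (and hence area, at fixed
    width) proportional to the oscillator strength of each transition."""
    grid = np.linspace(300.0, 900.0, 1201)
    env = np.zeros_like(grid)
    for lam, f in zip(wavelengths_nm, osc_strengths):
        env += f * np.exp(-0.5 * ((grid - lam) / sigma_nm) ** 2)
    return grid, env

# Placeholder data: transitions at 592 and 615 nm with assumed strengths
grid, env = uv_vis_envelope([592.0, 615.0], [0.20, 0.35])
print(f"simulated envelope maximum near {grid[np.argmax(env)]:.0f} nm")
```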
Conclusions
A full speciation study and structural characterization of the Cu(II):IC complexes was carried out in aqueous solution and in the solid state by using FTIR, UV/Vis absorption and NMR spectroscopies, complemented with DFT and TD-DFT calculations. These studies showed that three types of Cu(II)/IC complexes, with 1:1, 1:2 and 2:1 metal:ligand stoichiometries, are formed in aqueous solution and are favored in the pH range 6-7. These complexes are stable over time, and a polymeric arrangement involving repetition of the 1:2 complex unit was proposed to form in the solid state. The combination of these spectroscopic techniques with computational methods allowed the structures of the complexes formed to be characterized for the first time in appreciable detail. Information on their metal center coordination geometries, coordination sites of the ligand and full geometries is provided based on DFT calculations. The 1:1 and the 2:1 complexes show distorted tetrahedral metal centers, while the 1:2 complex has a square pyramidal geometry. The DFT and NMR results also reveal that antiferromagnetic coupling occurs between the two metal centers of the 2:1 complex observed in aqueous solution. These results are anticipated to provide crucial structural data relevant for subsequent studies of the physiological effects of the IC food dye. Such studies will certainly be important in determining the origin of the reported side effects related to exposure to IC.
Scheme 1 .
Scheme 1. Dianionic indigo carmine (IC 2− ) and numbering scheme used in the discussion of results.
Figure 10 .
Figure 10. ATR-FTIR spectra (1800-400 cm −1 ) of the solid powder samples obtained from an IC aqueous solution (bottom) and from a Cu(II):IC 5:10 mmol dm −3 aqueous solution (top), both at pH 8, in comparison with the DFT/B3LYP calculated spectra for the trans IC conformer (middle bottom) and the 1:2 Cu(II):IC structure (middle top). The vibrational frequencies of the theoretical spectra were scaled by 0.984 and 0.986, respectively, for the ligand and complex spectra (Figures S3 and S15).
Table 1 .
Symmetries, relative Gibbs energies (∆G 298K ) in kJ mol −1 at 298.15 K and equilibrium populations (P 298K ) in percentage, estimated from the relative Gibbs energies, calculated for the most stable forms of indigo carmine (B3LYP/6-311++G(d,p), taking into account the bulk solvent effects of water).
Table 2 .
Selected structural parameters of the trans isomer of indigo carmine calculated at the B3LYP/6-311++G(d,p) level taking into account the bulk solvent effects of water.
Table 3 .
Experimental 1 H NMR chemical shifts (δ/ppm) (in H 2 O/D 2 O) for the free ligand (IC) and complexes a, b and c (atom numbering as shown in Scheme 1).
1 H NMR (exp.) a
a δ values, in ppm, relative to Me 4 Si, using tert-butyl alcohol (δ H = 1.2) as internal reference.b not detected.
Table 4 .
EDS results for the solid-state Cu(II):IC complex. a Trace amounts of Al (0.08%), Si (0.48%), Cl (0.26%) and K (<0.01%) were also found. | 2024-07-10T15:05:41.546Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "1c2cb3b17377679f9b752355cc0daaaad4c487ba",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules29133223",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e0e25ab6e93fc8b02da96c92a6fdcaa043a5a8c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266751515 | pes2o/s2orc | v3-fos-license | Characteristics of the pediatric population with gender incongruence attending specialized care in Cali, Colombia: an observational, descriptive and retrospective study
Background Gender incongruence can often manifest itself from early childhood [Olson KR, Gülgöz S. Child Dev Perspect. 2018;12:93–7. https://doi.org/10.1111/cdep.12268], with a significant psychological impact, altering social and school dynamics without the appropriate care [Tordoff DM, et al. JAMA Netw Open. 2022;5(2): e220978. https://doi.org/10.1001/jamanetworkopen.2022.0978]. Early identification and gender-affirming care are essential to reduce adverse mental health outcomes, such as depression and self-harm [Tordoff DM, et al. JAMA Netw Open. 2022;5(2): e220978. https://doi.org/10.1001/jamanetworkopen.2022.0978]. This study aims to analyze characteristics and to estimate relative frequencies of gender incongruence in a population of children and adolescents receiving gender-affirming care at a high-complexity university hospital located in the third largest city in Colombia. Methods This was a retrospective descriptive study of patients under 18 with gender incongruence who received gender-affirming care between January 2018 and June 2022 at Fundacion Valle del Lili in Cali, Colombia. Sociodemographic and clinical characteristics of 43 patients were assessed, as well as the relative frequencies of gender incongruence. Data analysis was performed with the statistical package STATA®. To determine significant differences between the characteristics of the patients who participated in the study, the Mann-Whitney U test was performed for numerical variables with non-parametric distribution, while either Pearson's chi-squared test or Fisher's exact test was performed for categorical variables. Results For every ten individuals assigned female at birth who manifested gender incongruence, there were eight assigned male at birth. The median age of onset of gender incongruence was ten years (IQR: 5–13 years), and the median time elapsed between the reported onset of gender incongruence and the first consultation with a multidisciplinary gender-affirming team was three years (IQR: 1–10 years). The frequency of transgender identity was notable in participants with ages between 15 and 17 years. Depressive symptoms, anxiety, and psychotropic drug use were significantly higher in individuals assigned female at birth. Among the 25 individuals assigned female at birth who participated in this study, 60% self-recognized as transgender men. Among the 18 individuals assigned male at birth, 67% self-recognized as transgender women. The most frequent treatment was a referral to mental health services (46.51%). Conclusion Based on the cohort of our study, we can conclude that patients consult for gender-affirming treatment 3 years after the onset of gender incongruence. Anxiety and depression were higher in individuals assigned female at birth. Additionally, they presented at a later stage of sexual maturation, reducing the possibility of using puberty blockers.
Background
Gender incongruence is defined as a condition in which the gender identity of an individual does not line up with the gender assigned at birth [1-3]. Psychological, familial, occupational, and social concerns often arise with experiences of gender incongruence due to conflicts with established cultural expectations for the assigned sex [4]. Some studies indicate that gender identity develops between the ages of three and five years [2-5], but many transgender and non-binary youth begin experiencing gender incongruence at or around puberty [3]. However, some individuals may exhibit diverse gender expressions from childhood as an expected aspect of human developmental behavior, which may not necessarily reflect gender incongruence or transgender identity. There are few epidemiological studies of gender incongruence in childhood [4,6,7]. The first studies published in the 1960s reported the prevalence of transgender men to be 1/100,000 and transgender women to be 1/400,000. Later studies have shown a higher prevalence, especially in the European population, with values of 1/11,900 for transgender men and 1/30,400 for transgender women [5-12]. Currently, there are no studies describing the sociodemographic and clinical characteristics or the incidence and prevalence of gender incongruence in the childhood population of Colombia [10,13].
Knowledge of the baseline characteristics and trends in gender identity in this pediatric population helps to formulate specific care strategies and reduce poor outcomes in certain health indicators, such as depression, anxiety, suicide and self-destructive behaviours [2,14].
Given the scarcity of studies in developing countries that examine the psychological and emotional impact of gender identity incongruence in children and adolescents, this study was designed to carry out a sociodemographic characterization and estimate the frequency of gender incongruence in children and adolescents seeking gender-affirming care in a high-complexity university hospital in the third largest city in Colombia.
Study design
This was an observational, descriptive, and retrospective study based on the review of electronic medical records of patients under 18 with gender incongruence (ICD-10 codes: F64, F642, F648, F649, F668) who received gender-affirming care between January 1, 2018, and June 30, 2022, at the Hospital Universitario Fundación Valle del Lili in Cali, Colombia. "Gender incongruence" is included in the new International Classification of Diseases ICD-11, where the objective is not to improve or eliminate symptoms but to facilitate gender-affirming care [10,15]. However, in the current study, we used the international ICD-10 classification because this classification is still in effect within the institution.
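As a rough illustration of this record-selection step, the query described above can be expressed as a simple filter over an exported records table. The sketch below is written in Python with pandas; the file and column names are hypothetical, as the actual extraction was performed within the hospital's records system.

import pandas as pd

GI_CODES = {"F64", "F642", "F648", "F649", "F668"}  # ICD-10 codes listed above

# Hypothetical export of electronic medical records
records = pd.read_csv("emr_export.csv", parse_dates=["visit_date"])
mask = (
    records["icd10"].isin(GI_CODES)
    & (records["age_years"] < 18)
    & records["visit_date"].between("2018-01-01", "2022-06-30")
)
cohort = records.loc[mask]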
Data collection procedures and techniques
Medical records of those individuals seeking gender-affirming care diagnosed with cognitive deficits and those containing incomplete sociodemographic and clinical information were excluded from the study. A database was created in BD Clinic to record the information. The analysis included the following sociodemographic variables: age in years during the first specialized consultation, age in years at the reported onset of gender incongruence, years elapsed since the first assessment, nationality, and socioeconomic level according to the DANE (Colombia National Administrative Department of Statistics), based on the conditions of the housing in which the family group resides and the environment or area in which the housing is located. For this reason, the socioeconomic level is classified as: stratum 1 (low-low), stratum 2 (low), stratum 3 (lower-middle), stratum 4 (middle), stratum 5 (upper-middle) and stratum 6 (high).
Common comorbidities, such as anxiety, depression, self-harm, suicide attempt, use of psychotropic drugs, family history of depression and anxiety, autism spectrum disorder (ASD), and use of psychoactive substances, were also included. Additionally, the individual's Tanner scale stage at the first assessment and the type of gender-affirmative treatment received were recorded. Of the 46 individuals who sought care at the gender clinic during the study period, 3 were excluded for not meeting the selection criteria. Thus, 43 patients were enrolled in the study.
Statistical analysis
Data analysis was performed with the statistical package STATA ® version 16.0. Initially, the normality of the quantitative variables was evaluated with the Shapiro-Wilk test. The median and interquartile range were calculated for numerical variables with a nonparametric distribution, and the absolute and relative frequencies were calculated for categorical variables. To determine significant differences between the characteristics of the women and men who participated in the study, the Mann-Whitney U test was performed for numerical variables, while either Pearson's chi-squared test or Fisher's exact test was performed for categorical variables. For all tests, the level of statistical significance was defined as a p value < 0.05. The prevalence of gender incongruence was estimated with 95% confidence intervals for different sociodemographic categories including the sex assigned at birth, age group, and presence of a pathological or pharmacological history.
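The analysis itself was run in STATA, but for illustration the same decision logic can be sketched in Python with SciPy; the data frame and column names below are hypothetical stand-ins for the study database.

import pandas as pd
from scipy import stats

df = pd.read_csv("gender_clinic_records.csv")  # hypothetical file

# Normality check for a numeric variable (e.g., age at first consultation)
_, p_norm = stats.shapiro(df["age_first_consult"])

# Non-parametric comparison of the two assigned-sex groups
afab = df.loc[df["sex_assigned"] == "F", "age_first_consult"]
amab = df.loc[df["sex_assigned"] == "M", "age_first_consult"]
_, p_mw = stats.mannwhitneyu(afab, amab)

# Categorical variable: Fisher's exact test on a 2x2 contingency table
table = pd.crosstab(df["sex_assigned"], df["depression"])
_, p_fisher = stats.fisher_exact(table.values)

alpha = 0.05  # significance threshold used in this study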
Sample characteristics
Between 2018 and 2022, 43 participants between the ages of 3 and 17 were evaluated in the gender clinic. 58% were assigned female at birth and 42% were assigned male at birth. Half of the individuals were younger than 15 at the first assessment, with no statistically significant differences found between the two biological sexes. For every ten individuals assigned female at birth who manifested gender incongruence, there were eight assigned male at birth. The median age of onset of gender incongruence was 10 years for the total sample (IQR: 5-13 years), with 11 years for half of the females (IQR: 6-13 years) and 5 years for half of the males (IQR: 4-13 years). The time between incongruence onset and first admission to the gender clinic was 3 years (IQR: 1-10 years). Most of the subjects analyzed were born in Colombia, and more than half came from families with a middle socioeconomic level (strata 3 and 4) (Table 1). The occurrence of depression, anxiety, and the use of psychotropic drugs was significantly higher in individuals assigned female at birth. Other behaviors, such as self-injury, suicide attempts, family history of depression and anxiety, and the use of psychoactive substances, were similar in both sexes.
The Tanner scale assessment performed during the first consultation showed that 63% of the participants were classified as stage V, that is, with external physical and sexual characteristics (breasts, testicular volume, and pubic hair development) that correspond to complete sexual development. Breaking it down by sex assigned at birth, 88% of those assigned female at birth and 28% of those assigned male at birth were classified as Tanner stage V, with statistical significance between both groups. Regarding the type of treatment provided, significant differences were found between both sexes; a high percentage of those assigned female at birth received mental health treatment, while a high percentage of those assigned male at birth received treatment with GnRH analogs (Table 1). The trend in the use of GnRH as hormonal treatment in individuals assigned male at birth is related to the earlier Tanner stages observed during the initial evaluation, when the level of sexual characteristic maturation is still incipient (Tanner I, II, III). In contrast, treatment by mental health services was more common among individuals assigned female at birth, as most of these patients presented for the initial assessment with complete sexual maturation (Tanner V).
Among patients with a history of depression, anxiety, and psychotropic drug use, nearly half or more than half identified as transgender men (Table 2).
Discussion
This retrospective descriptive study is the first to provide demographic and clinical data and the frequency of gender incongruence in a sample of children and adolescents evaluated in the gender clinic of a university hospital in Cali, Colombia. Of the 43 patients, 58% were assigned female at birth and 42% were assigned male at birth. This higher representation of individuals assigned female at birth has also been reported in other studies [15,16]. This trend is thought to occur because individuals assigned female at birth with gender incongruence are more socially accepted than those assigned male at birth [15,16].
The results of the study show that gender incongruence is present in the pediatric population, with a higher percentage of patients assigned male at birth identifying as transgender women (67%) and of those assigned female at birth identifying as transgender men (60%). This finding is consistent with reports from other studies, in which the prevalence of transgender identity in individuals assigned male at birth tends to be higher than in those assigned female at birth [7, 11-14, 18-21].
Age was a determining factor in the identification of gender incongruence in this cohort. Subjects between 15 and 17 years of age more frequently expressed gender incongruence compared to patients between 10 and 14 years of age. A similar trend was observed in an analytical longitudinal study conducted in the Netherlands, in which 37% of children who expressed gender incongruence between 5 and 17 years of age continued to endorse a sense of gender nonconformity after several years of follow-up [17,22]. Although our data do not include the follow-up over time necessary to assess the persistence of gender incongruence from younger ages, they allowed us to identify, with clinical criteria, the age groups in which many adolescents recognize their gender identity. The evolution of gender incongruence is a topic undergoing investigation. Different prospective observational studies have been carried out to evaluate the persistence of gender incongruence in children and adolescents into adulthood, and these persistence rates ranged from 27 to 81% [28]. Factors associated with persistence are still under investigation, but some that have been identified include higher intensities of gender dysphoria and more body dissatisfaction [28].
The early onset of gender incongruence in our study population is another significant finding. Half of the participants began to experience gender incongruence at approximately ten years of age, with five years of age being the earliest onset found in those assigned male at birth. The time elapsed between onset of incongruence and the first consultation or first care by a multidisciplinary team of experts was three years for all patients. Half of the subjects came to the first consultation when physical pubertal development was advanced. Gender incongruence tended to be more defined at this stage, especially for those assigned female at birth.
A significantly higher frequency of anxiety, depression, and psychotropic drug use was observed among those who identified as transgender men. Different studies have reported higher rates of psychiatric comorbidities in gender-nonconforming pediatric populations [20,21,23,24], including anxiety, depression [2,5], and suicidal ideation [22,25]. A study conducted in Chicago [23,26] revealed significantly lower rates of depression in transgender women than in transgender men. These results suggest that people assigned female at birth are more likely to suffer emotional distress due to social expectations and gender inequality.
It is essential to mention that, since its inception, the gender clinic at Fundación Valle del Lili has implemented a multidisciplinary approach to patient care, with a team composed of mental health, endocrinology, and primary care physicians who collaborate to assess and establish an accurate diagnosis, make decisions, and provide individual recommendations regarding treatment according to age and stage of puberty (prepubertal, mid-pubertal and postpubertal) [24-27]. For all stages, the primary treatment for participants with gender nonconformity was a mental health intervention that aimed to help the minor and his or her family navigate identity exploration and acceptance, identify possible psychiatric comorbidities, provide strategies to help strengthen integration into their environment, and minimize risk behaviors, based on the recommendations provided by WPATH and the gender-affirmative model [28,29].
Study limitations
One limitation of the study is the sample size, which makes it difficult to generalize the results. It is essential to know the sociodemographic and clinical profile of children and adolescents with gender incongruence, as this will allow us to strengthen health care and treatment in the future according to the characteristics and needs of our pediatric population.
Conclusion
Based on the cohort of our study, we can conclude that patients consult for gender-affirming treatment 3 years after the onset of gender incongruence. However, we are unaware of the reasons for the delay in receiving gender-affirming treatment. Additionally, individuals assigned female at birth presented a higher percentage of anxiety and depression and arrived with complete sexual maturation, which reduces the possibility of using puberty blockers.
Table 1
Characteristics of pediatric patients with gender identity incongruence
Table 2
Characteristics of pediatric patients with gender identity incongruence | 2024-01-05T05:12:04.596Z | 2024-01-03T00:00:00.000 | {
"year": 2024,
"sha1": "966e03ca9875c0916819b1d967828b0ef46c8d74",
"oa_license": "CCBY",
"oa_url": "https://capmh.biomedcentral.com/counter/pdf/10.1186/s13034-023-00689-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "966e03ca9875c0916819b1d967828b0ef46c8d74",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247559519 | pes2o/s2orc | v3-fos-license | Decentering the Subject, Psychoanalytically: Researching Imaginary Spacings through Image-Based Interviews
Since the more-than-human turn, geographers have increasingly called for a decentering of the human subject by breaking away from a classically modern understanding of subjectivity and by treating humans as one of many players. In this article, we offer an alternative way of decentering the subject by following the psychoanalyst Jacques Lacan. Far from being subject-centered, psychoanalysis aims to understand the subject as a radically decentered and fragile production, which is only secured through what Lacan calls the imaginary. The imaginary combines two realms—image and imagination—and focuses on how the subject generates a sense of the self through spatial identification with images. Based on image-based interviews conducted in Singapore, Vancouver, and Berlin following the method of photo-elicitation, we demonstrate how this imaginary subject can be empirically investigated. We identify five stages in the interviews that help us retrace how the subject establishes an imaginary relationship with an image as well as how it is confronted with the fragile constitution of this relationship. We conclude by emphasizing the potential of image-based interviews to investigate the decentering of subjects and explore ways in which geographers can further decenter the subject psychoanalytically.
One of the most significant changes in human geography over the last two decades is the place it assigns to the human subject. In the course of the "ontological turn," "material turn," and "posthuman turn," geography went from "rethinking the 'human' in human geography" (Whatmore 1999) to "decentring the human in human geography" (K. Anderson 2014). Calls to decenter the human subject dominate much of today's disciplinary agenda and draw their strength from renewed attention to objects, nonhumans, and all kinds of other more-than-human actors (for an overview, see also B. Anderson and Harrison 2010; K. Anderson 2014; Ash and Simpson 2016; Simpson 2017; Kinkaid 2021). Overall, what unites the various approaches introduced to human geography in the last two decades, from actor-network theory, nonrepresentational theory, and object-oriented philosophy to new materialism, speculative realism and postphenomenology, is "a move away from a subject-centered approach to experience" (Ash and Simpson 2016, 53).
The subject is not entirely eliminated but still maintains a place, albeit decentered, in human geography. In fact, a number of more-than-human geographers, especially from the field of cultural geographies, have written about subjectivity in recent years (see Wylie 2010; Dawney 2013; K. Anderson 2014; Larsen and Johnson 2016; Simpson 2017). More-than-human approaches do not want to abandon the subject altogether because their problem is not "the human," as such, but a particular kind of (human) subjectivity: "the thinking subject: the cogito (I think) that Descartes identified as ontologically other than matter" (Coole and Frost 2010, 8; see also K. Anderson 2014). It is this subject that geography has vehemently attempted to decenter in the past two decades, a supposedly rational, independent, and free vision of the human being that places itself above and not beside other beings. What more-than-human geographers call for, then, is not a geography that simply rejects the human subject but one that gives it its proper (that is, decentered) place. As Dawney (2013) put it, "the subject needs to resurface as a decentred site, a site through which to explore the affective webs of relation that give shape to lives, and through which sense is made of lives lived. In repositioning the subject at a decentred centre, it can become a catalyst for academic knowledge production" (635).
In this article, we argue for an alternative way of decentering the subject in human geography following the psychoanalyst Jacques Lacan. This might sound surprising, because psychoanalysis seems to be one of those approaches that is primarily, even exclusively, centered on the human subject. How can such an approach teach us anything about decentering the subject? Sigmund Freud already recognized the significant role of decentering for psychoanalysis in his famous comparison of psychoanalysis with the Copernican turn. With Copernicus, humans already had to learn "that our earth was not the centre of the universe but only a tiny fragment of a cosmic system"; with psychoanalysis, "human megalomania" suffered again, as it "seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in its mind" (Freud 1981, 284-85). Lacan embraced this idea to develop a radically decentered concept of the subject. Subverting, or rather extending, the Cartesian cogito, Lacan (2006) stated, "I am thinking where I am not, therefore I am where I am not thinking" (430). Psychoanalysis therefore decenters the human subject, not only with regard to more-than-human others (as this was already done by Copernicus) but with regard to the human itself: psychoanalysis decenters the subject from within by understanding the (unconscious) subject as being situated outside the (conscious) mind.
In philosophy, a debate has just begun about whether the posthuman call for a decentering of the subject bypasses the subject of psychoanalysis. Against the stance of more-than-human approaches to decenter the subject qua Cartesian cogito, Lacanian philosophers highlight that "such a subject was already decentered long ago … by psychoanalysis" (Sbriglia and Žižek 2020, 7). In this article, we take up this thought and apply it to the potential of psychoanalytically decentering the subject through and within geographical research. Nevertheless, it is important to acknowledge that this is by far not the first attempt in human geography to take the psychoanalytic decentering of the subject into account; in fact, when geographers began to decenter the subject, they openly drew on insights of (Lacanian) psychoanalysis (see Pile and Thrift 1995; Blum and Nast 1996; Pile 1996). This influence of psychoanalysis remains largely neglected, however, in the canon of more-than-human approaches in geography. In this article, we therefore want to bring the psychoanalytic legacy of decentering the subject back to light. 1 We focus on Lacan's concept of "the imaginary," which has been one of the early entry points for geographers to engage with the works of Lacan in the 1990s (see Rose 1995; Blum and Nast 1996; Pile 1996), but is still sometimes considered as "rarely given any formal theoretical inflection" (Gregory 2009, 282) in geography. We offer such a formal theoretical inflection by focusing on the role of the image as a defining criterion of the Lacanian imaginary that has often been neglected in favor of its illusory and phantasmatic dimension. Through the imaginary, Lacan developed the idea that the subject is based on spatial identification, internalizing an external image to establish an utterly decentered self-identity. We theoretically reflect on Lacan's imaginary as an approach to the "spacing of the subject." This phrase stems from Simpson (2017), who used it as the main aim of every geographical decentering of the subject: "'spacing' is taken as an active and ongoing process, a movement of differing and deferral, where 'the subject' is always already in relation to what it is not, always emerging from these relations, but where such relations are by no means fixed or certain" (6). We seek to show how the Lacanian imaginary is perfectly suited to this "spacing of the subject" and that psychoanalysis therefore deserves more than a side note and instead should be equally considered "[o]ne of the key drivers in thinking critically about the subject in geography" (Simpson 2017, 3). Subsequently, we demonstrate how Lacan's decentered subject can be empirically investigated based on image-based interviews conducted between 2018 and 2020 in Singapore, Vancouver, and Berlin that applied the method of photo-elicitation. By focusing on the image of a room that was used in all of the interviews, we identify five stages in the interview process (description, interpretation, identification, questioning, and traversal) to retrace how the interviewees establish an imaginary relationship with the image to generate a coherent self-image as well as how they are confronted with the fragile constitution of this relationship. We conclude by emphasizing the potential of psychoanalysis to investigate the spacing of the subject as a way to allow geographers to further engage with the intrinsic relationship between image, fantasy, and space.
Lacan's Imaginary Spacing of the Subject
A basic entry point into psychoanalysis is the splitting of the subject. Psychoanalysis assumes that humans are fractured, inconsistent, and conflicted beings rather than complete, consistent, and stable ones. Against this background, Lacan aimed to understand how the subject develops and maintains a conception of the self in the first place, what Freud called the "ego." The ego is the realm of the "I" (moi) and denotes the domain of psychoanalytic thinking most closely linked to everyday understandings of identity or individuality. Lacan (1991b) considered this realm "the seat of illusions" (62), because it allows the subject to construct a coherent sense of the self. To better understand how the subject generates this illusionary sense of the self, Lacan introduced "the imaginary" as one of the three main registers through which he unfolded his theory of the subject (next to the symbolic and the real). The imaginary allows the subject to imagine itself by providing it with an image of unity, coherence, or completeness, despite its inconsistent configuration. For Lacan (2013, 35), then, imaginary basically means a linkage between image and imagination:
Imagination Image
At the origin of the imaginary, Lacan situated the "mirror stage," the moment when the child looks into the mirror and starts to assume that it is seeing "itself" (and not just a reflection). This moment is crucial for Lacan, because he insisted that humans are born with a fragmented body, a "body in pieces," and that it is only after the mirror stage that they conceive a coherent and consistent, yet "orthopedic," image of the self: "[T]he mirror stage is a drama … and, for the subject caught up in the lure of spatial identification, turns out fantasies that proceed from a fragmented image of the body to what I will call an 'orthopedic' form of its totality" (Lacan 2006, 78).
Although the ego, for Lacan, depends on a projection of the self in the mirror, it does not require a mirror in the literal sense of the term: "all sorts of things in the world behave like mirrors" (Lacan 1991a, 49). What it takes is an image through which the subjects are able to perceive themselves; in other words, an image through which the subjects get caught up in the lure of spatial identification. The early Lacan work also uses the term imago to clarify this point. Imagos are external images with which an individual identifies to establish an imaginary identity. Imagos function as our images of who we are (ego) and who we want to be (ideal ego). The individual, for Lacan, thus only assumes an identity through decentering one's self via the image of the other. As emphasized in one of the early milestones introducing Lacan to human geography, "Subjectivity [for Lacan] is spatially and ontologically decentered; the subject is shaped literally from the outside in" (Blum and Nast 1996, 564, italics in original). It requires spatial identification with an outer image to become oneself. Otherwise, strictly speaking, the subject (qua ego) does not exist: "The subject is no one. It is decomposed, in pieces. And it is jammed, sucked in by the image, the deceiving and realised image, of the other. … That is where it finds its unity" (Lacan 1991a, 54).
The whole point of Lacan's imaginary is to understand how the individual generates an imaginary space of the self by tying together its intimate fantasies (of coherence, unity, stability, etc.) with an external image. Lacan spoke of this process as a "drama" because the imaginary superimposes identification with alienation: "Alienation is constitutive of the imaginary order. Alienation is the imaginary as such" (Lacan 1997, 146). Although the subject can establish self-identity only through its identification with an external image, this image never becomes "fully" part of the self. The imaginary therefore makes it structurally impossible for the subject to achieve "full" self-identity due to the impossibility of a complete internalization of the image: "To Lacan, this proves that the imaginary is not very well accommodated in human beings. A human being can couple his or her image to basically any object in the environment; no object is perfectly suitable to complement a human being's self-image" (Nobus 1999, 116).
Lacan offered us a weak notion of self-identity based on the impossibility of an ultimate linkage between imagination and image. For Lacan, psychoanalysis therefore stands in ultimate opposition to every attempt to strengthen the ego, commonly known as "ego-psychology," which Lacan (2006, 336) considered the "antithesis" of any true psychoanalysis, whose ultimate aim is not a strengthening but a weakening of the ego and all areas affected by it: "Not only is the conscious identity or individuality (or ego) of the subject dramatically decentralized and deprioritized - viewed in fact as a type of symptom or mirage - but so is the whole field of meanings and (self-)understandings premised upon such an egoic (or 'imaginary') basis" (Hook 2018, 4). By offering us a decentralized and deprioritized notion of the (ego of the) subject, Lacanian psychoanalysis becomes an ultimate forerunner of what is usually claimed to be the insight of the more-than-human turn. If decentering the subject, in human geography and elsewhere, means "to treat the figure of the thinking human subject … [as an] always-fragile production" (K. Anderson 2014, 14), we insist on the psychoanalytic traversal of the ego as a fruitful approach to fulfill this task.
Getting Caught up in the Lure of Spatial Identification
One should always provide a little illustration for what one discusses.
- Lacan (2013, 34)
We now demonstrate how geographical research can empirically scrutinize this decentered subject as a spacing maneuver. In an ongoing research project that engages with emotional and affective dimensions of security-related geographical imaginations, our research team conducted 169 interviews in Berlin, Vancouver, and Singapore with people from a variety of social classes and age groups to speak about the security-related issues and challenges they face in their urban everyday lives. We chose to focus on security and insecurity in our research because they are crucial contributors to the construction of various subject positions (based on age, class, gender, etc.) and are part and parcel of geographical imaginations of urban life. Therefore, we scrutinized the (in)securing aspects emanating from housing and home-making and analyzed the importance of geopolitical positioning with respect to political caesuras for everyday perceptions of security (Genz et al. 2021; for an overall summary of this project's research agenda, see also Helbrecht et al. 2022). What we want to focus on in the following is the methodological approach used in our research, specifically how the use of images in the interviews gave us access to the intimate space of self-positioning. The interviews followed the approach of photo-elicitation, which is one of the two main strands of image-based interviews, often defined in contrast to "reflexive photography" (Harper 2002). Although photo-elicitation is certainly not new to geographers and is even considered alongside reflexive photography as "well established and time-honored staples in the photography toolkit of geographers" (Sanders 2020, 101), we extend the scope of previous uses of this method by demonstrating that photo-elicitation is particularly suited to investigating the decentering of the subject. Photo-elicitation allows us to reveal how subjects internalize an external image by using it as a fantasy screen for projecting their desires. In reflexive photography, the subject already identifies with the image when entering the interview (because it is the interviewee who takes the photographs used in the interview); in photo-elicitation, the researcher actively participates in the process by which the interviewee gets decentered, creating an imaginary unity through the image of the other.
For our research project, several photographs were used that not only depict different scales and types of space (from rooms and squares to borders and outer space) but also leave room for "free associations." Apart from the selection of images, the interviews followed a very open approach, with the images being shown to the interviewee one after the other as broad questions were asked, such as, "What do you see in this image?" or "What feelings does this image trigger?" In the following, we provide a type of best-case scenario for this research method. Therefore, we arranged several moments from the 169 interviews in a way that allows us to differentiate what we consider five elementary steps for researching the imaginary spacings of the subject through image-based interviews. We describe the following five stages in more detail in what follows:
1. Description: The interviewee describes the manifest content of the image.
2. Interpretation: The interviewee tries to make sense of the image.
3. Identification: The interviewee develops an imaginary relationship with the image.
4. Questioning: The interviewee questions the meaning of the image.
5. Traversal: The interviewee loses the connection to the image.
In the following, we focus on only one of the images used in all three cities, often as the first image to begin the interview (Figure 1). When looking at this image, interviewees often started with a description of the various objects shown in this image (Stage 1). Most prominently, there is the bed in the center, but there are also blankets, books, clothes, a backpack, electricity, some other belongings, and a large photograph of a woman holding a camera, all spread out on the floor around the bed. Although the room was often initially described as full of things, many interviewees nonetheless pointed to a certain "emptiness" that distinguished this image. For instance, when asked what they saw in the image, one immediate response was, "an empty, tiled room" (Ber19_35). After first carefully describing the interior of the room, another interviewee stressed more emphatically that it was the absence of humans that turned the room into an empty room: So one looks into a room: white walls, tiled, one sees a bed, a single bed. … I don't know, some clothes are lying around and a box, a big picture on the wall on the right, cables, sockets, books. But it's relatively dark I would say. Yes and empty. So no people in the picture. (Ber41_17) The reason why this image was quite appealing for many interviewees is that it revolves around an absence, a lack, and that fantasy is needed to cover this lack. As one of the interviewees aptly pointed out when looking at the image, "This is a place where one wants to know what poor person actually lives there" (Ber30_15). Against this background, many interviewees, after describing the manifest content of the image in the first stage of the interview, quickly started to fantasize about the person who might live in this room (Stage 2): The assumptions regarding the specific shape of the owner of this room diverged just as widely as the ideas about what kind of room this is. Is this a place someone calls home or only a short-term overnight accommodation? Is it owned by a woman or a man? Is its resident going through a rough time or is this just a messy place of a student? Although the interviewers did not provide a clear answer to this question, instead insisting on the ambiguity inherent to this image, we emphasize that at this stage of the interview, most of the interviewees had developed quite a precise idea about who owns the room pictured in the image by using their imagination. A compelling example of how fantasy comes into play to give meaning to the image is this quote from Vancouver. When the interviewee was asked what she saw in the image, she said: It's very emotional. What I see in this picture is somebody who doesn't have a lot of money but has a lot of strong personal connections. I see that, you know, I don't know if it's a man and that's the girlfriend [pointing to the photograph next to the bed] or somebody they really admire, but there's obviously some connection there and I think that's [pointing to the blanket next to the bed] where a dog would sleep. … I see somebody who has not very much but yet is connected to people and pets. (Van10_96) After the absent owner of the room has taken shape in the interviewee's imagination, we enter the next stage of the interview (Stage 3), in which we shift from image to imago and the interviewee gets caught up in the lure of spatial identification.
Now the image is no longer just an image but becomes something through which the interviewees face themselves to establish their self-identity. This moment of spatial identification, where the image is internalized by the subject and considered as part of the self, functioned primarily in two ways: either through an emphasis on the similarities between the empty room and the interviewee's own way of life or through an insistence on the differences between the two. For instance, when asked what he felt when looking at the image, one interviewee from Berlin stated, This is me when I was sixteen again. That's sort of what my first apartment was like, not quite as bad maybe. … It was glorious. I'm a man, you know. At that time, everything still worked with the ladies. Perhaps things were less complicated in those days, I don't know. (Ber03_66) Whereas this man in his sixties emphasized the similarities between the empty room and his first apartment by nostalgically thinking of the time when he was still a teenager and things were supposedly "less complicated" than today, another interviewee from Singapore, a woman in her mid-thirties, contrasted the empty room with her current apartment to highlight that she desires a clean and personal environment to enjoy herself: In these cases, the image establishes a realm of the "ideal ego." Here, the empty room opens the fantasy space for an "identification with the image in which we appear likeable to ourselves, with the image representing 'what we would like to be'" (Žižek 1989, 116). Regardless of whether the image functions in contrast to, or in support of, the subject's imaginary self-identity, it is thus crucial to insist that the interviewees enter a self-decentering process through which they establish a coherent sense of themselves. At this stage, the image "undermines our position as 'neutral,' 'objective' observer, pinning us to the observed object itself. This is the point at which the observer is already included, inscribed in the observed scene-in a way, it is the point from which the picture itself looks back at us" (Žižek 1991, 91). After the third stage of the interview allowed us to grasp the subject's imaginary self-identity as spatially and ontologically decentered (the subject as shaped from the outside in), we entered the next stage of the interview (Stage 4), in which the interviewees were confronted with the alienated nature of their decentered self. Shortly after the interviewees came to their first conclusion about what they saw (of themselves) in the image, many of them delved deeper into the empty room and stumbled across certain details that did not fit. Like a detective who enters a crime scene to scan its superficial appearance for clues to what really happened there, the interviewees began unmasking "the imaginary unity" of their own imago by discovering "inconspicuous details that stick out, that do not fit into the frame of the surface image" (Žižek 1991, 53). The detail interviewees mentioned most often was the photograph of the woman with a camera leaning against the wall next to the bed.
So what doesn't fit in there is the big picture with the woman with the camera. (Ber40_53) So, if the picture of the woman were not there, I would simply say that this is the room of a person who has just moved in, who just can't really afford a lot of furniture yet or who really likes a sort of minimalism, and just doesn't want to have any furniture at all, except for a bed, and that's not even a real bed. But the thing with the big picture is a bit strange. (Ber35_27) For many interviewees, the picture of the woman holding a camera rendered the empty room suspicious. If this room is a person's home, why is the picture not hung on the wall? If the person only sleeps here temporarily (e.g., while the rest of the apartment is being renovated), why put the picture there in the first place? If the owner of the room is poor, how can she or he afford this picture and not sell it? Who is that woman? Is she just a random model or someone the person knows? Is the person admiring her? Is this the home of a stalker? All of these questions raised in the interviews testify to the weak linkage of imagination and image, which is why the interviewees suddenly find themselves at this stage confronted with a realm of total ambiguity. The picture of the woman is "the detail that 'does not fit,' that 'sticks out' from the idyllic surface scene and denatures it … and thus opens up the abyss of the search for a meaning" (Žižek 1991, 90-91). Although the interviewees were initially convinced that they knew the meaning of the image of the empty room, they are now faced with the impossibility of "really" knowing what this image is about. Instead of opening a fantasy "space wherein they could project their nostalgic desires, their distorted memories" (Žižek 1991, 9), the image now points to the alienated condition of the imaginary. From now on, the interviewee might be able to see everything, or rather nothing, in the image. As captured in one example from Berlin, the interviewee stumbled from one detail of the room to another just to come to the desperate conclusion that he cannot say what he is looking at: This might be an old building. Although the tiled wall doesn't fit. No, and the plugs do not fit either. That's something else. It could be a garage, I don't know. It could be anything. I don't know, I really don't, I don't know. It could be anything. … My fantasy is going wild right now. (Ber03_66) The empty room loses its fantasmatic presence and the image is exposed as "a screen masking a void" (Žižek 1989, 141). This final stage of the image-based interview (Stage 5) mirrors Lacan's notion of the final moment of the psychoanalytic treatment when the subject "traverses the fantasy" to experience "the fact that the fantasy-object, by its fascinating presence, is merely filling out a lack" and that "[t]here is nothing 'behind' the fantasy" (Žižek 1989, 148). The moment when the interviewee stated that his "fantasy is going wild" is precisely when he traversed the fantasy, the moment "when the coordinates of the fantasy space are lost via hysterical breakdown" (Žižek 1991, 66), so that the image of the empty room turns out to be the lure it always was.
Where Is the Subject?
"Questions around the subject and its decentering have become increasingly established as matters of concern for human geography" (Simpson 2017, 9). Although most geographers today refer to the advantages of more-than-human approaches, with the rising interest in decentering the subject, our article emphasizes psychoanalysis as a suitable approach for geographers to fulfill this task. We thus aim to enrich the recent interest of geographers in a decentering of the subject with a Lacanian psychoanalytic perspective, because it has much to offer for "displacing the thinking human subject" (K. Anderson 2014, 5). 2 Far from being subject-centered, psychoanalysis engages the subject as an always-fragile and decentered production, which we cannot approach directly but only by taking into account how the subject relates to others. We find the truth of the subject not by digging deep down into its mind but rather by searching for the subject's most intimate kernel as being located outside of the subject. "This is what the Lacanian notion of 'd ecentrement', of the decentered subject, aims at: my most intimate feelings can be radically externalized" ( Zi zek 2008, 141; see also Kingsbury 2007). The imaginary constitutes a key category to further develop this thought, because it allows us to take into account how the self is based on a spatial interweaving of imagination (inside) and image (outside). A psychoanalytic approach therefore situates decentering not only in the relationship between humans and nonhumans but also within the human and its spacing (it)self. The problem for psychoanalysis is not so much that there are other actors besides humans who have agency but rather that humans themselves have no genuine agency over themselves as they are shaped literally from the outside in.
In our case study, we carved out five typical stages within the interviews that allow us to elaborate the functioning of the imaginary spacing of the subject:
1. Description: In this stage, the interviewees focus on an "objective" description of the image by pointing out what they see.
2. Interpretation: The interviewees try to make sense of the image through their fantasy.
3. Identification: The interviewees pass over from looking at the image to being looked at by the image. At this stage, image and imagination are successfully linked and the image functions as a mirror through which the interviewee gets caught up in the lure of spatial identification.
4. Questioning: The interviewees are confronted with the inconsistent and fragile nature of the linkage of image and imagination by stumbling across details in the image that do not fit and thus derail the imaginary relationship.
5. Traversal: The interviewees lose their connection to the image and thus "traverse the fantasy" that formerly provided the image with meaning.
Of course, not every image-based interview passes through all five stages, but we consider all stages crucial for engaging the decentering (i.e., spacing) of the subject psychoanalytically. Only when the imaginary space between the ego and the image is not only established in the interview but also traversed can we successfully demonstrate how fantasy both orients and disorients the subject (Pohl 2020). Following on from this, we hope to stimulate further research in geography that aims at decentering the subject through image-based interviews, as the image can function both as a realm of spatial identification that secures the subject by offering a sense of identity and as an opening for engaging with the ultimately inconsistent and illusory configuration of that identity. Since "all sorts of things in the world behave like mirrors" (Lacan 1991a, 49), there are numerous images geographers could use to trace the imaginary spacings of the subject. How does the subject identify with media representations and virtual spaces, from commercials to pictures posted on social media? How do images of artistic expression, from fine art to street art, function as reflections of the ego and the ideal ego?
In what forms of political images can the subject be mirrored? These and many other questions could in the future become part of a geography dedicated to the decentering of the subject. | 2022-03-20T15:18:54.221Z | 2022-03-18T00:00:00.000 | {
"year": 2022,
"sha1": "03bc29ceba5836be6c83f2a2d066b68f1e71ef63",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/00330124.2021.2014909?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "234cc5cec4b1508b2004e0dc653404f548b6f6ae",
"s2fieldsofstudy": [
"Psychology",
"Geography"
],
"extfieldsofstudy": []
} |
6772808 | pes2o/s2orc | v3-fos-license | Impact of tobacco smoking on cytokine signaling via interleukin-17A in the peripheral airways
There is excessive accumulation of neutrophils in the airways in chronic obstructive pulmonary disease (COPD) but the underlying mechanisms remain poorly understood. It is known that extracellular cytokine signaling via interleukin (IL)-17A contributes to neutrophil accumulation in the airways but nothing is known about the impact of tobacco smoking on extracellular signaling via IL-17A. Here, we characterized the impact of tobacco smoking on extracellular cytokine signaling via IL-17A in the peripheral airways in long-term smokers with and without COPD and in occasional smokers before and after short-term exposure to tobacco smoke. We quantified concentrations of IL-17A protein in cell-free bronchoalveolar lavage (BAL) fluid samples (Immuno-quantitative PCR) and cytotoxic T-cells (immunoreactivity for CD8+ and CD3+) in bronchial biopsies. Matrix metalloproteinase-8 and human beta defensin 2 proteins were also quantified (enzyme-linked immunosorbent assay) in the BAL samples. The concentrations of IL-17A in BAL fluid were higher in long-term smokers without COPD compared with nonsmoking healthy controls, whereas those with COPD did not differ significantly from either of the other groups. Short-term exposure to tobacco smoke did not induce sustained alterations in these concentrations in occasional smokers. Long-term smokers displayed higher concentrations of IL-17A than did occasional smokers. Moreover, these concentrations correlated with CD8+ and CD3+ cells in biopsies among long-term smokers with COPD. In healthy nonsmokers, BAL concentrations of matrix metalloproteinase-8 and IL-17A correlated, whereas this was not the case in the pooled group of long-term smokers with and without COPD. In contrast, BAL concentrations of human beta defensin 2 and IL-17A correlated in all study groups. This study implies that long-term but not short-term exposure to tobacco smoke increases extracellular cytokine signaling via IL-17A in the peripheral airways. In the smokers with COPD, this signaling may involve cytotoxic T-cells. Long-term exposure to tobacco smoke leads to a disturbed association of extracellular IL-17A signaling and matrix metalloproteinase-8, of potential importance for the coordination of antibacterial activity.
Introduction
An excessive local accumulation of neutrophils and their potentially tissue-damaging effector molecules is a common finding in the peripheral airways of smokers with chronic obstructive pulmonary disease (COPD). 1 While displaying this accumulation of innate immune cells and their effector molecules, patients with COPD at the same time suffer from increased susceptibility to bacterial infections. 2,3 Tentatively, more knowledge about the mechanistic rationale for this immunological phenomenon may prove important for understanding the pathophysiology of COPD.
Interleukin (IL)-17A is a cytokine that can be secreted by cytotoxic and T helper lymphocytes and this cytokine is involved in the recruitment of neutrophils to sites of infections in mammals. 4 Here, IL-17A stimulates structural cells, such as fibroblasts and epithelial cells, to produce chemokines and growth factors, thereby promoting the local recruitment and accumulation of neutrophils. 5,6 Even though it has previously been shown that intracellular IL-17A protein is expressed in peripheral airway tissue and that this tissue expression is higher in smokers with COPD than in nonsmokers, virtually nothing is known with respect to the impact of tobacco smoking on extracellular cytokine signaling via IL-17A in the peripheral airways, the functionally most critical aspect of cytokine production in this compartment. 7 We hypothesized that extracellular cytokine signaling via IL-17A in the peripheral airways is detrimentally altered by tobacco smoking and that this also alters critical effector molecules of innate effector cells. To address this hypothesis, we characterized the impact of tobacco smoking on extracellular cytokine signaling via IL-17A in the peripheral airways of well-characterized, current, long-term smokers, with and without COPD, and in occasional smokers, before and after short-term exposure to tobacco smoke. Because of the recent claim that cytotoxic T lymphocytes constitute a source of IL-17A in tobacco smokers, we examined the association of extracellular cytokine signaling via IL-17A with immunoreactivity to CD3 + and CD8 + cells in bronchial biopsies. 8 In the peripheral airways, we also quantified matrix metalloproteinase (MMP)-8 and human beta defensin 2 (HBD2), two innate effector molecules downstream of the coordinating cytokine IL-17A, both being produced by neutrophils. 9,10
Materials and methods
ethics
This study was performed on human subjects after informed oral and written consent in accordance with the Declaration of Helsinki. This study and the clinical study protocol were approved after review by the regional ethical review committee in Stockholm (Diary no 2005/733-31/1-4l) and Gothenburg (Diary no S 313-00; T186-02), respectively.
Cohorts
The study included two cohorts of human subjects as described below.
long-term exposure
The first cohort was recruited at Karolinska University Hospital in Stockholm and included long-term, current tobacco smokers (referred to as long-term smokers from here on) with or without COPD and age-matched nonsmoking control subjects (Table 1). 12,13 Current smokers with a FEV1/forced vital capacity ratio >0.7 and FEV1 >80% of predicted values were included in the group of smokers without COPD. Subjects with clinical or laboratory signs of infection within at least 4 weeks prior to bronchoscopy were rescheduled. The control group consisted of nonsmokers with no history of asthma or other lung diseases, and all had a normal lung function. At the first visit, a bronchodilation was induced by the inhalation of ipratropium bromide (0.5 mg) and salbutamol (2.5 mg) via an Aiolos Plug-in® inhalator device (Aiolos, Karlstad, Sweden). At the second visit, a bronchodilation test was conducted using only salbutamol (0.1 mg) via an MDI and Volumatic® spacer device (Allen & Hanburys, London, UK). In this cohort, bronchoscopy was performed in accordance with standard procedures as previously described. 14 Bronchoalveolar lavage (BAL) was collected by instilling five aliquots of 50 mL sterile saline and gently reaspirating after each aliquot. 15 After centrifugation of the BAL sample, the cell pellet was separated from the fluid, and the cell concentration and viability of the cells were determined. The fluid supernatants of these centrifuged samples (referred to as cell-free BAL fluid) were analyzed for concentrations of extracellular, soluble IL-17A protein, as previously described in detail. 16,17 Briefly, before analysis, the fluid was concentrated 20× using Amicon Ultra filters 3K Da (Merck Millipore, Carrigtohill, Co. Cork, Ireland). The concentration of IL-17A protein was then quantified using a highly sensitive (limit of detection 1.2-2.5 pg/mL), specific and customized immuno-qPCR (TATAA Biocenter®, Gothenburg, Sweden). For immunohistochemistry of CD3- and CD8-positive cells, bronchial tissue biopsies were obtained from the upper lobe bronchus. The biopsies were fixed in acetone and processed into glycolmethacrylate resin as previously described. 18 Thin (2 µm) sections were cut and immunostained in duplicate using monoclonal antibodies (anti-CD3 [M7254] and anti-CD8 [M7103]) (Dako™ Cytomation, Dako Denmark A/S, Glostrup, Denmark) and the streptavidin-biotin-peroxidase detection system (Dako™ Cytomation). The concentrations of extracellular, soluble HBD2 (sensitivity 7.8 pg/mL) (Phoenix Pharmaceuticals, Inc, Burlingame, CA, USA) and MMP-8 (sensitivity 78 pg/mL) (R&D Systems, Inc., Minneapolis, MN, USA) proteins were quantified in cell-free BAL fluid using commercially available enzyme-linked immunosorbent assay (ELISA) kits. Sample values in the MMP-8 ELISA that were below the standard curve were arbitrarily set to half of the lowest value of the standard curve divided by 20 to compensate for the concentration procedure of the BAL fluid samples. Sample values in the HBD2 ELISA that were below the standard curve were set to half of the lowest value of the standard curve. Two samples in the HBD2 ELISA (from the group of smokers without COPD) were above the highest value of the standard curve, and these samples were set at the highest value of the standard curve (ie, 500 pg/mL).
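The out-of-range ELISA substitutions described above amount to a small, rule-based preprocessing step. As a minimal sketch in Python, using placeholder standard-curve values (the actual limits are those of the kits named above):

    def censor_mmp8(value, lowest_std):
        """MMP-8 readings below the standard curve: half the lowest standard
        value, divided by 20 to compensate for the 20x concentration of the
        BAL fluid samples."""
        if value < lowest_std:
            return (lowest_std / 2.0) / 20.0
        return value

    def censor_hbd2(value, lowest_std, highest_std=500.0):
        """HBD2 readings below the curve: half the lowest standard value;
        readings above the curve are capped at the highest standard
        (ie, 500 pg/mL)."""
        if value < lowest_std:
            return lowest_std / 2.0
        return min(value, highest_std)

    # Placeholder example: lowest HBD2 standard assumed to be 15.6 pg/mL.
    print(censor_hbd2(7.0, lowest_std=15.6))    # -> 7.8
    print(censor_hbd2(612.0, lowest_std=15.6))  # -> 500.0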
short-term exposure
The second cohort was recruited and investigated at Sahlgrenska University Hospital in Gothenburg. This cohort included healthy occasional tobacco smokers (referred to as occasional smokers from here on) and healthy nonsmoking controls (Table 2). Here, all subjects underwent two bronchoscopies, including BAL; the first at day 1 (termed BAL1) and a second at day 14 (termed BAL2). On days 12 and 13, all occasional smokers smoked ≥10 filter cigarettes of a commercial brand (Marlboro™, tar 10 mg, nicotine 0.8 mg) that were purchased (not given as a gift). Occasional smokers all smoked at least once a month, but not more than four times a month or more than ten cigarettes per week. Moreover, all occasional smokers had refrained from smoking 4 weeks prior to the first bronchoscopy. The nicotine intake of the occasional smokers was confirmed by measuring cotinine levels in the urine at day 14. 19 A detailed description of this study protocol was included in the original publication by Glader et al. 19 A subgroup of the clinical data from the referred study has been used for the current study as well.
statistics

Statistical analyses for more than two groups were performed using the Kruskal-Wallis test with Dunn's multiple comparison test as post hoc test. A nonparametric Wilcoxon signed rank test was used for matched pairs and a nonparametric Mann-Whitney t-test for comparing two groups. A P-value <0.05 was considered statistically significant. Median and range (min-max) were used for all descriptive statistics in the text, whereas median and individual values were used in the figures. The correlations between two observations were tested using the Spearman rank correlation test. Due to the novelty of the key parameters (in particular, the immuno-qPCR for IL-17A), we were unable to perform exact calculations of statistical power.
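As a rough illustration, this nonparametric battery maps onto standard routines in Python's scipy.stats, with Dunn's post hoc test available in the third-party scikit-posthocs package; all group data below are placeholders, not the study's values.

    from scipy import stats
    import scikit_posthocs as sp  # third-party package for Dunn's post hoc test

    # Placeholder concentration data (pg/mL) for three study groups.
    nonsmokers = [9.1, 5.2, 12.0, 7.4]
    smokers_no_copd = [15.4, 22.1, 18.3, 30.0]
    smokers_copd = [10.2, 23.6, 14.8, 19.5]
    groups = [nonsmokers, smokers_no_copd, smokers_copd]

    # Omnibus comparison of more than two groups.
    h_stat, p_kw = stats.kruskal(*groups)
    # Dunn's multiple-comparison test as post hoc.
    dunn_p = sp.posthoc_dunn(groups, p_adjust="bonferroni")

    # Two unmatched groups: Mann-Whitney test.
    u_stat, p_mw = stats.mannwhitneyu(nonsmokers, smokers_no_copd)

    # Matched pairs (e.g. BAL1 vs BAL2, placeholders): Wilcoxon signed rank.
    before = [5.0, 7.2, 6.1, 9.4]
    after = [5.5, 6.8, 7.0, 9.1]
    w_stat, p_wx = stats.wilcoxon(before, after)

    # Correlation between two observations: Spearman rank correlation.
    il17 = [9.1, 15.4, 10.2, 23.6]
    mmp8 = [120.0, 300.0, 150.0, 410.0]
    rho, p_rho = stats.spearmanr(il17, mmp8)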
Results

group-related differences for IL-17A protein

long-term smokers
The IL-17A protein was detectable in 55 out of 56 of the cell-free BAL fluid samples of all study groups in the first cohort, and these concentrations of IL-17A were clearly higher in long-term smokers without COPD than in nonsmoking healthy control subjects, whereas long-term smokers with COPD displayed corresponding concentrations at a level between the two other groups of cohort 1 (Figure 1). As a result, the pooled group of long-term smokers with and without COPD (15.4 [1.7-95.3] pg/mL, n=36) displayed higher concentrations of IL-17A in cell-free BAL fluid than the nonsmoking healthy control subjects (9.1 [1.6-58.3] pg/mL, n=20) of cohort 1 (P=0.014). Moreover, we did not find any clear, reproducible difference for treatment with (23.6 [5-77.1] pg/mL, n=9) and without (10.2 [1.7-95.3] pg/mL, n=8) corticosteroids (P=0.3655). All the results from these long-term smokers with COPD were therefore pooled and regarded as "ever smokers" in the study. There was no correlation between BAL sample return volumes and the concentrations of IL-17A in cell-free BAL fluid.
Occasional smokers
We detected no difference for the concentrations of IL-17A protein in cell-free BAL fluid from occasional smokers before and after short-term exposure to tobacco smoke ( Figure 2A). However, in these occasional smokers of cohort 2, the concentrations of IL-17A measured prior to short-term exposure were higher than the corresponding concentrations in the nonsmoking controls ( Figure 2B). Although these concentrations of IL-17A in occasional smokers prior to short-term exposure were higher than in never-smokers, these concentrations were significantly lower than those in long-term smokers (Figure 3). We performed two bronchoscopies in the nonsmoking controls as well, but here without any cigarette exposure, and found that this procedure per se did not cause any sustained alteration of the concentrations of IL-17A in cell-free BAL fluid (data not shown).
Cytotoxic T lymphocytes in long-term smokers

CD8-positive cells
The immunohistochemistry signal for CD8-positive cells (per mm 2 submucosa) in bronchial biopsies and the concentrations of IL-17A in cell-free BAL fluid samples displayed a strong positive correlation in the group of long-term smokers with COPD only ( Figure 4A). Immunohistochemistry images displayed a clear distinction between positive and negative staining ( Figure 4B).
CD3-positive cells
In analogy to CD8-positive cells, the corresponding immunohistochemistry signal for CD3-positive cells and IL-17A BAL concentrations displayed a positive correlation in the group of long-term smokers with COPD (r=0.508, P<0.05, data not shown). However, no corresponding correlation was detected in the group of long-term smokers without COPD or in nonsmoking healthy subjects.
Innate effector molecules in long-term smokers
The MMP-8 protein concentration was detectable in 52 out of 56 of the cell-free BAL fluid samples of all study groups in the first cohort. However, the MMP-8 BAL concentrations displayed no statistically significant differences between the study groups in the first cohort (data not shown). There was a correlation between MMP-8 and IL-17A concentrations in the BAL fluid when all three study groups were included (Figure 5A). Moreover, we observed a strong correlation when we analyzed the group of nonsmokers from the first cohort separately (Figure 5B), whereas no such correlation was evident in the pooled group of long-term smokers with and without COPD (data not shown). The HBD2 protein concentration was detectable in 55 out of 56 of the cell-free BAL fluid samples of all study groups in the first cohort; it did not, however, display any statistically significant differences between the study groups in the first cohort (data not shown). However, here we found
Discussion
In this first study on cytokine signaling via IL-17A in the peripheral airways of long- and short-term smokers, the results showed that the concentration of extracellular IL-17A protein in cell-free BAL fluid is increased in long-term smokers without COPD, whereas those with COPD only tended to display an increase, the lack of significance possibly being due to the limited material available. Moreover, there was no increase in the referred extracellular IL-17A protein after short-term exposure in occasional smokers, and this is, to the best of our knowledge, completely novel information on how much smoking is required for the smoking-related alteration in IL-17A signaling. [20][21][22][23][24] We also obtained confirmatory evidence that cytokine signaling via IL-17A may relate to cytotoxic lymphocytes, at least in long-term smokers with COPD. 8 We think that our strict matching of tobacco load and sex among the long-term smokers with and without COPD, as well as our use of a highly sensitive and specific immuno-qPCR method, made our key findings possible. 19 A novel smoking-related mechanism of interest was indicated by the observation that the relationship between extracellular IL-17A signaling and an innate effector molecule in the peripheral airways is altered in long-term smokers. The case was that the correlation between the concentrations of IL-17A and the neutrophil protease MMP-8, a protease that is known to be affected by smoking, was evident in cell-free BAL fluid from nonsmokers but not in the cell-free BAL fluid from the pooled group of long-term smokers with and without COPD. 25,26 It can be speculated that this loss of correlation reflects a disturbance in the IL-17A-mediated control of anti-bacterial activity originating from neutrophils, a possibility that clearly warrants further investigation in mechanistic studies. This is supported by the finding by Roos et al 23 that neutrophil accumulation in the lungs caused by tobacco smoking is dependent on IL-17A and IL-1R1. For the innate effector molecule HBD2, our results argue that the relationship with extracellular IL-17A signaling was intact in all study groups.
When we quantified IL-17A protein concentrations in cell-free BAL fluid from occasional smokers prior to and after exposure, we did not obtain any evidence for a sustained impact of short-term smoking on extracellular cytokine signaling via IL-17A protein. Thus, the observed increase in the concentrations of IL-17A in cell-free BAL fluid in long-term smokers is likely to be the result of long-term exposure mainly. In line with this idea, we also obtained evidence that long-term smoking causes more of an increase in extracellular cytokine signaling via IL-17A than occasional smoking. We did this by comparing concentrations of IL-17A in cell-free BAL fluid from occasional smokers prior to exposure to those in long-term smokers with and without COPD. We found that these concentrations of IL-17A were higher in long-term smokers and that occasional smokers had higher concentrations than nonsmoking healthy control subjects.
We cannot rule out that the lack of a difference in the concentration of IL-17A for long-term smokers with and without COPD relates to the use of corticosteroids in some of the smokers with COPD, even though we failed to prove a statistically significant impact of this treatment. Of note, however, the previous study by Doe et al 20 forwarded evidence supporting the impact of long-term smoking but also the lack of a substantial difference for the smokers with and without COPD, although this study measured immunoreactivity in tissue and not extracellular protein. 20 Another study showed opposing results in terms of an association between COPD progression and elevated IL-17A assessed by immunohistochemical methods on lung tissues. 22 In fact, even though it is original, the assessment of extracellular cytokine signaling via IL-17A in the peripheral airways of long-term smokers that we present here is compatible with all previous studies on IL-17 cytokines in other compartments from central and peripheral airways, although the impact of short-term exposure has not previously been addressed at all. 3,[12][13][14]20 The results of our current study thereby add to the growing body of evidence that forwards IL-17A as a mediator of interest for understanding inflammation caused by tobacco smoking in several compartments of human lungs.
Conclusion
The current study forwards evidence that long-term but not short-term tobacco smoking exerts a substantial impact by increasing extracellular cytokine signaling via IL-17A protein in the peripheral airways, whereas the impact of COPD per se remains uncertain. Moreover, the current study also indicates that this long-term smoking may be associated with a disturbance in the IL-17A-mediated control of antibacterial activity originating from neutrophils. We think that our findings have important implications for the understanding of the pathogenesis of airway inflammation caused by tobacco smoke, including COPD, given that repeated bacterial infections in the airways constitute a hallmark and a negative prognostic factor in these smokers. New studies are required to validate whether the current findings illustrate the mechanistic rationale for the immunological paradox that there are more innate effector cells and, at the same time, more bacterial infections in long-term smokers with COPD. | 2018-01-13T08:21:27.379Z | 2016-09-06T00:00:00.000 | {
"year": 2016,
"sha1": "731ddcb5144d2c0cba1a4b811c1a3648f09260a9",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=32230",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "731ddcb5144d2c0cba1a4b811c1a3648f09260a9",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6242068 | pes2o/s2orc | v3-fos-license | Discourse, anaphora and parsing
Discourse Representation Theory, as formulated by Hans Kamp and others, provides a model of inter- and intra-sentential anaphoric dependencies in natural language. In this paper, we present a reformulation of the model which, unlike Kamp's, is specified declaratively. Moreover, it uses the same rule formalism for building both syntactic and semantic structures. The model has been implemented in an extension of PROLOG, and runs on a VAX 11/750 computer.
The idea of separating a computer program into two distinct parts, a logical specification of the problem to be solved and a proof procedure that "interprets" this specification to actually solve the problem, has been a prominent idea in recent work on logic programming, especially in the work of Kowalski. We connect directly into this tradition, in that our specification of DRS theory is provided in the form of an extended Horn-clause logic formalism.
Our system thus consists of two parts: a logical specification of DRS theory, written in a language that we have dubbed PrAtt (for Prolog with Attributes), and a simple theorem prover (interpreter) which is capable of deducing the DRSs that correspond to various input sentences using the logical specification of DRS theory.
In terms of Kowalski's (Kowalski 1979b) famous maxim "Algorithm = Logic + Control", the logical specification of the DRS theory corresponds to the "Logic", while the inference technique used by the inference engine corresponds to the "Control". Currently our inference engine uses a simple top-down proof technique (inherited from Prolog, in which the inference engine is written), so the system as a whole (= logical specification of DRS theory + top-down theorem prover) functions essentially as a top-down predictive parser. However, this top-down behaviour is a property of the theorem prover only, and one could replace the theorem prover component with a more sophisticated proof technique such as Earley deduction (Pereira and Warren 1983), resulting in a system that used a generalization of Earley's parsing algorithm. Such a change would be a change to the theorem prover only, since both systems would use the same logical specification of DRT.

[Author: Ewan Klein, Centre for Cognitive Science [formerly School of Epistemics], Edinburgh University.]

[Footnote: The research reported in this paper was conducted at the Center for the Study of Language and Information, and was made possible in part by a gift from the System Development Foundation; we gratefully acknowledge financial support from the National Science Foundation (grant BNS-8309780), and Klein also acknowledges financial support from the U.K. Science and Engineering Research Council (Advanced Fellowship). Earlier versions of this paper were presented at the Summer Meeting of the Association for Symbolic Logic, July 15-20 1985, Stanford University, and the Autumn Meeting of the Linguistics Association for Great Britain, September 18-20 1985, University of Liverpool. We would like to express our appreciation for comments and suggestions from Jo Calder, Glyn Morrill, Carl Pollard, Fernando Pereira, John Perry, Ivan Sag, Stuart Shieber, and Henk Zeevat.]
A naive model of anaphoric dependency
In this section we give a brief overview of the basic ideas involved in the model. We do this by presenting a "naive" model which provides the core of the analysis developed in section 5, but which ignores the complexity of syntactic structure and quantifier binding. The naive model enables us to explain our stance, independent of these complicating factors, on matters such as the changing nature of the discourse context over time, the mechanisms used to describe reference, semantic gender agreement, etc.
The following diagram (1) illustrates a naive declarative model of anaphoric dependency, where all that is required to license an anaphoric pronoun is the presence of a possible antecedent to its left.
In the naive model we conceive of a discourse context as simply consisting of a set of individual names, or reference markers. These represent the entities which are available to be talked about in the discourse, and play a similar role in our framework as the discourse entities of (Webber 1979). In particular, they provide the set of possible antecedents for anaphoric noun phrases. We make the simplifying assumption that the only way for a reference marker to find its way into the context is by courtesy of an indefinite description. We assume that reference markers are typed, and we adopt the convention of using 'f' as a marker for female gender entities, 'm' for male gender entities, and 'x' for neuter or indeterminate gender entities. 1 The vertical bars in (1) represent moments of time in the analysis of the discourse; each moment is associated with a discourse context. In this way, we can characterize a developing discourse context as a series of discrete states, each of which is localized at a specific point in time and unchanging during the course of the parse. 2 In the diagram above, these contexts are shown above the bar that corresponds to the point of time at which they hold. Thus, at the beginning of the discourse the context was empty (i.e. the null set), while after the phrase a woman was uttered the context contained the single reference marker f. Consequently, { f } serves as the context for kissed.

1 These three types of reference marker correspond to the genders available for pronominal agreement in English. However, it seems plausible that a more complicated account of agreement would be required for those languages (e.g. French and German) in which gender marking is semantically arbitrary.
We view the meaning of a linguistic expression α as a relation between the context that precedes α and the context that follows α. That is, the meaning of α has the general form shown in (2):

(2) C [α] C′, where C is the incoming context and C′ the outgoing context

Consequently, in the naive model the discourse context is determined by a series of equations relating the context which immediately precedes a lexical item to the context which immediately follows that item. For individual words, this relation is part of the lexical specification. To illustrate, the semantic contribution of woman is given here by (3a), or more generally, as (3b).

(3) a. C [woman] C′ iff C′ = C ∪ { f }
    b. C [α] C′ iff C′ = C ∪ { x }, where x is the reference marker associated with α
The anaphoric pronoun her behaves in a very different fashion to indefinite noun phrases. Rather than adding a reference marker to the following context, it looks in the preceding context for a reference marker of the right sort (i.e. one that agrees with it in number and gender). If there is no such antecedent marker, the pronoun cannot be interpreted as anaphoric. The meaning of anaphoric her is the relation in (4).
(4) C [her] C′ iff f ∈ C and C′ = C, where f is the reference marker associated with her

A sequence of discourse contexts is well-formed for a string if all of the relations associated with the lexical items in the string hold; i.e. the discourse contexts are a solution to the relational equations. Sometimes these equations will have a single solution; in that case, the discourse is unambiguous. However, usually the equations have multiple solutions, which means, in effect, that the discourse has many interpretations. This arises, in the present discussion, when a pronoun has several possible antecedents. 3 On the other hand, it is also possible that the equations have no solution at all. This case arises when a pronoun is used in a discourse context that contains no appropriate reference marker at all.
At a more abstract level, we can view this model as one in which the context is a stream of reference markers, which is threaded from one lexical item to the next. The equations associated with individual lexical items act as (possibly nondeterministic) operators on their input stream to produce an output stream, which serves as the input to the following lexical item. One of the main virtues of this simple picture is that it invites comparison with other ideas. Our proposed notion of meaning is clearly reminiscent of the claim in (Barwise and Perry 1983) that meaning is a relation between different types of situation, though it also has its roots in earlier work on indexical semantics, such as (Stalnaker 1972). Second, it is also reminiscent of the technique used in logic programming known as difference lists (Pereira 1985) or threading.

2 It seems that this technique of factoring a single nonmonotonic representation into a series of monotonic ones is applicable in many areas other than the one discussed here. At an abstract level it is similar to the technique discussed by (Kowalski 1979a). It is also similar to the use of difference lists in logic programming, since the "content" of a particular element is the difference between its "output" and its "input".

3 In such a case, our program merely enumerates all possible interpretations, which results in the familiar combinatorial explosion of solutions. A better technique, which we cannot explore here, would be to factor out the ambiguity and localize it in the representation.
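The relational view of meaning just sketched can be made concrete. The following Python fragment is a minimal sketch of our own (not the original implementation): each lexical item maps an incoming context, here a set of typed reference markers, to the set of possible outgoing contexts. A pronoun with no suitable antecedent yields no solutions, and several antecedents yield several readings.

    # Contexts are frozensets of (marker, gender) pairs threaded left to right.
    def indefinite(marker, gender):
        """An indefinite description adds a fresh reference marker."""
        return lambda ctx: [ctx | {(marker, gender)}]

    def pronoun(gender):
        """A pronoun leaves the context unchanged, contributing one reading
        per agreeing antecedent; no antecedent means no interpretation."""
        def relation(ctx):
            return [ctx for (_m, g) in ctx if g == gender]
        return relation

    def content(ctx):
        """Words like 'kissed' pass the context through untouched."""
        return [ctx]

    def thread(words, ctx=frozenset()):
        """Enumerate every sequence of discourse states consistent with the
        relational equations of the words, in left-to-right order."""
        if not words:
            return [ctx]
        return [out for mid in words[0](ctx) for out in thread(words[1:], mid)]

    # 'A woman kissed her': exactly one reading.
    print(len(thread([indefinite("f", "fem"), content, pronoun("fem")])))  # 1
    # A bare 'her' against an empty context: no solution at all.
    print(len(thread([pronoun("fem")])))                                   # 0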
Discourse Representation Theory
The naive model presented in the last section ignored all syntactic and lexical interactions with the "left-to-right" nature of anaphoric dependency. The fatal flaw of this account is that it fails to explain the anaphoric properties of universally quantified NPs. The data which shows this is well known, and some illustrative cases are given in (5) to (7).
(5) a. A woman i went home. She i was tired.
    b. Every woman i went home. She i was tired.

(6) a. Every man i thought he i was ill.
    b. Lee gave every woman i her i prize.

(7) a. Every man saw a woman i. She i was going home.
    b. Every woman who kissed a man i loved him i.
(5) shows that a universal NP does not normally act as an antecedent for pronouns in a following sentence. 4 According to the variable-binding paradigm of anaphora, this follows because a universal can only enter into an anaphoric relation with pronouns that are in its scope. For our current purposes, it is not important whether scope is determined in terms of a tree-geometrical notion like c-command (Reinhart 1983), or in terms of function-argument structure, as proposed by (Ladusaw 1980) and (Bach and Partee 1980); in either case, it is clear that the scope of the universal in (5) is that portion of the first sentence that we have italicised. Examples (6) illustrate cases where a universal does enter into an anaphoric relation with a pronoun in its scope (again indicated by italicisation). (7) is intended to indicate the interaction between indefinites and universals. In (7a), the indefinite has narrower scope than the universal, and it is thereby incapable of acting as an antecedent for a pronoun such as the following she which is outside the scope of the universal. By contrast, when both the indefinite and the pronoun fall within the scope of a universal, as in (7b), an anaphoric link is permissible. Note that (7b) is a so-called 'donkey' sentence.
The study of these syntactic and lexical effects has been a central theme of modern theoretical linguistics, but most work within this paradigm has concentrated almost exclusively on intra-sentential anaphora. However, recently (Kamp 1981), (Heim 1982) and (Haik 1984) have developed theories capable of providing a unified account of the main properties of intra- and inter-sentential anaphora. We will base our account on Kamp's Discourse Representation Theory, and in this section, we briefly outline those aspects of Kamp's model which are of most relevance to us.
DRT is intended to explicitly capture the distinctions in anaphoric potential exhibited by (5a) and (5b), while simultaneously providing a basis for truth-conditional semantic interpretation. Thus (5a) would be associated with a DRS of the form (8).
(8) [ f | woman(f), went-home(f), tired(f) ]

A discourse representation has two parts: a 'universe' consisting of a set of discourse markers (in this case a singleton set) and a set of conditions. The sentence A woman went home licences the introduction of the reference marker f into the universe of the DRS, and this marker is also entered as the argument of the predicate went-home. When She was tired is analyzed, the pronoun can be interpreted as anaphoric on a preceding NP if the marker licensed by that NP is 'accessible'; i.e. if the marker belongs to the universe of the immediately enclosing DR or a superordinate one. Since f is accessible, the pronoun she can be identified with it to yield the condition tired(f).
Before turning to sentences involving universal NPs, it will be useful to consider in a little more detail the procedure for constructing a DRS like (8) proposed by (Kamp 1981). Kamp's rules pivot on the noun phrases in a sentence, and depend particularly on any determiners in the noun phrases. It is useful to think of every determiner as having a semantic restrictor and a semantic scope. The determiner will bind an argument position in each of these. Thus, in a simple intransitive sentence like the first sentence of (5), the restrictor of a is woman(), while its scope is went home(), where the empty parentheses indicate an open argument position. Given an existing (possibly empty) DRS K, a sentence of the form [[a Res] Scope] is "processed" in the following manner: (i) add a new reference marker x to the universe of K; (ii) fill the argument slot in Res by x, and add the resulting clause to the conditions of K; and (iii) fill the argument slot in Scope by x, and recursively call any applicable construction rules to process the resulting string.
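Steps (i)-(iii) translate directly into a small procedure. The sketch below is our own illustration, representing a DRS simply as a universe list plus a condition list rather than Kamp's box notation:

    import itertools

    _fresh = itertools.count()

    def process_indefinite(drs, restrictor, scope):
        """Process [[a Res] Scope] against an existing DRS (universe, conditions)."""
        universe, conditions = drs
        x = f"x{next(_fresh)}"            # (i) add a new reference marker to K
        universe.append(x)
        conditions.append(restrictor(x))  # (ii) fill the Res argument slot with x
        scope(drs, x)                     # (iii) fill the Scope slot and recurse

    drs = ([], [])
    process_indefinite(
        drs,
        restrictor=lambda x: ("woman", x),
        scope=lambda k, x: k[1].append(("went-home", x)),
    )
    print(drs)  # (['x0'], [('woman', 'x0'), ('went-home', 'x0')])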
Let us turn now to sentences involving universals. The DR associated with (5b) is illustrated in (9).
(9) [ f′ | [ f | woman(f) ] => [ | went-home(f) ], tired(f′) ]

The universal quantifier every triggers the introduction of two subordinate DRSs, linked by the relation =>; this corresponds roughly to implication in first order logic. When we come to analyze the second sentence of the discourse, She was tired, the reference marker licensed by every woman is trapped in the subordinate DRS; it is not accessible at the top level of the discourse. Consequently, the only option is to treat the pronoun she as non-anaphoric, which we have indicated here by associating it with a distinct reference marker.
When we consider sentence-internal anaphora, the antecedent-introducing potential of every and a converge. For example, in both of the following sentences, he can be anaphoric to the subject NP:

(6a) Every man i thought he i was ill.
(10) A man i thought he i was ill.
Although it may not be obvious from the examples given so far, DR theory correctly predicts that the reference markers associated with an indefinite or universal NP in subject position will be anaphorically accessible to pronouns that it c-commands. 5 To see why, we need to consider in a little more detail the way in which DRs are constructed on Kamp's approach.

5 It might be argued that DR theory fails to provide an adequate semantic distinction between a 'c-command binding' relation and a 'discourse anaphora' relation, as proposed for example by (Reinhart 1983) in order to account for the strict/sloppy ambiguity in VP ellipsis. Whether this criticism is justified or not depends in large part on the appropriate analysis of such ellipsis phenomena in the DR framework. For some discussion, see (van Eijck 1985), (Klein 1985), (Roberts 1984).
Construction rules apply to sentences on a top-down, left-to-right basis. Given a sentence like (6a) or (10), the first constituent to be processed is the subject NP. We either stay in the current DR, if the determiner is a, or 'push down' to an embedded DR if the determiner is every. (This embedded DR is, therefore, the antecedent box of a conditional like that displayed in (9).) A discourse marker x i is introduced into the universe of whatever is now the current DR, and x i also becomes the argument of the subject nominal (e.g. man(x i)) and the first argument of the predicate VP (e.g. x i thought he was ill). When the VP is processed, there are again two cases, depending on whether the subject determiner was a or every. In the first case, we enter the new conditions licensed by the VP into the current DR. In the second case, we close off the current (antecedent) DR, and open a new embedded DR which forms the consequent box of the conditional. Kamp claims that the reference markers accessible as antecedents to a given pronoun occurrence consist of those reference markers which are present in the universe of either the current DR or of any DRs which are superordinate to the current DR. Of two DRs K 1 and K 2, K 1 is superordinate to K 2 if: (i) K 2 is embedded in K 1, or (ii) if K 1 is the antecedent of a conditional of which K 2 is the consequent, or (iii) if there is some K 3 such that K 1 is superordinate to K 3 and K 3 is superordinate to K 2. This is illustrated in the diagram (11) below, where the lightly shaded boxes are all superordinate to the darkly shaded box.
(11) [diagram of nested DRS boxes: the lightly shaded boxes are all superordinate to the darkly shaded box]
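The superordination and accessibility definitions can be stated recursively. A minimal sketch of our own, assuming each DRS records the box it is embedded in and, for a consequent box, its paired antecedent box (the attribute names are ours):

    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class Box:
        universe: Set[str] = field(default_factory=set)
        embedded_in: Optional["Box"] = None     # clause (i) of superordination
        antecedent_box: Optional["Box"] = None  # clause (ii): this box is the
                                                # consequent of antecedent_box

    def superordinate(k1: Box, k2: Box) -> bool:
        """K1 is superordinate to K2 by embedding, by the antecedent/consequent
        relation, or transitively (clause (iii))."""
        parents = [p for p in (k2.embedded_in, k2.antecedent_box) if p is not None]
        return any(p is k1 or superordinate(k1, p) for p in parents)

    def accessible(k: Box) -> Set[str]:
        """Markers in the universe of K or of any DRS superordinate to K."""
        markers = set(k.universe)
        for p in (k.embedded_in, k.antecedent_box):
            if p is not None:
                markers |= accessible(p)
        return markers

    # A (9)-style configuration: the consequent box can see the antecedent's
    # marker f and the top-level marker f2, but not vice versa.
    top = Box(universe={"f2"})
    ante = Box(universe={"f"}, embedded_in=top)
    cons = Box(universe=set(), embedded_in=top, antecedent_box=ante)
    print(accessible(cons))  # {'f', 'f2'} (set order may vary)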
Consider now what follows when we come to process the NP he in either (6a) or (10). It can be anaphorically linked to any reference marker which is accessible to it, and this will of course include the marker x i introduced by the subject NP.
Let us now attempt to summarise the salient features of DRT. Note, first, that every noun phrase is associated with a 'space' 6 in a Discourse Representation. Referential terms, which we take to include definite and indefinite descriptions, proper names, and definite pronouns, are entered into an existing space. By contrast, universally quantified NPs induce a new subspace. Second, the space associated with an NP represents both the quantificational scope of the NP and its anaphoric domain.
Third, the boundaries of these spaces are not coterminous with clause or sentence boundaries. A clause containing universal NPs will induce a number of subspaces; conversely, the space associated with a referential NP can encompass indefinitely many sentences of a given discourse.
Fourth, the space of an indefinite NP which occurs within the scope of a universal NP is the same as the space of the universal.
The flow of anaphoric information
In the last section we showed how DRT is able to simultaneously describe both the semantics of quantification and the anaphoric 'range' of referential noun phrases in terms of a single discourse representation. The standard version of DRT depends crucially on processing notions in order to explain the failure of anaphora in examples like (12).
(12) He i liked a boy i.

Since the reference marker for a boy is not introduced into the DR until after the pronoun he is introduced, it is unavailable as a possible antecedent. That is, the failure of anaphora is explained by assuming that the pronoun's antecedent is assigned at the time at which it is introduced into the DRS, and that the reference marker for the noun phrase is introduced after the pronoun was introduced.
In a declarative framework, an explanation in terms of processing order is impermissible; hence we represent left-to-right dependencies by explicit equations. Although these equations are in principle non-directional, it can be helpful to think of them as providing a means for transmitting information from one node in the syntactic structure to another. Bottom-up information flow is central to syntax-driven compositional semantics of the familiar sort: semantic values are associated with the leaves of the syntax tree, and the semantic value of a complex constituent is determined as a function of the semantic values of the constituent's daughters. The diagram in (13) shows this direction of information flow.
(13) [tree diagram for a girl kissed a boy: NP a'(girl') and VP kissed'(a'(boy')) composed bottom-up from the leaves Det a', N girl', V kissed', Det a', N boy']

Although this approach has proven to be extremely powerful, it is awkward and intuitively unsatisfactory as a means for dealing with anaphoric dependencies. Even if much semantic information is indeed composed on a bottom-up regime, it seems highly plausible that anaphoric information, that is, information about the set of available antecedents, flows in a left-to-right direction. We have already seen that a simple left-to-right model of this information flow can be constructed by regarding meaning as a relation between contexts, but we have also seen that such a model is inadequate for dealing with the facts of bound anaphora. A more satisfactory model can be constructed by reflecting on the principles involved in constructing Discourse Representations. As we pointed out in the previous section, Kamp's construction rules centre on the determiners a and every, since they trigger the introduction of reference markers, the binding of argument positions, and the introduction of sub-spaces. What we shall suggest, therefore, is that information about possible antecedents flows from a determiner to the determiner's restrictor, and from the restrictor to the determiner's scope. The following diagram (14) illustrates how this top-down, left-to-right flow is integrated with the orthodox phrase marker of a girl kissed a boy.
(14) [tree diagram for a girl kissed a boy with threaded in- and out-lists]
The light, incoming lines on the left-hand side of a node indicate incoming information about the set of possible antecedents. This set will be encoded in something we call the "in-list". The light lines on the right-hand side of a node indicate outgoing information about antecedents, encoded in the form of an "out-list". In general, the out-list of any node will be its in-list plus any additional information added by that node. Circled nodes mark constituents that supplement their in-list with new reference markers. The in-list and the out-list together form a difference list, in that the content added by any item is the difference between its in-list and out-list.
Alternatively, one can view the in-lists and the out-lists of nodes as streams along which information about antecedents flows: this anaphoric information is threaded through the syntactic tree structure. Notice that we assume the sentence as a whole will be fed an in-list which is supplied by the preceding discourse. Moreover, the sentence as a whole will also produce an out-list, which will provide potential antecedents for following discourse.
The next diagram (15) illustrates the flow of information for every girl kissed a boy.
(15) [tree diagram for every girl kissed a boy with threaded in- and out-lists; the VP's out-list does not percolate to the S node]
By contrast with (14), the out-list from the VP, containing reference markers for girl and boy, is "trapped" at that level rather than percolating up to the S node. The out-list for the sentence as a whole is just the sentence's in-list. This captures the idea from binding theory that the scope of a quantifier is normally limited to its c-command domain (Reinhart 1981, Reinhart 1983). In terms of DRT, it corresponds to the closed subspace that is associated with universal NPs.
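The difference between the two determiners can be captured in a few lines. A rough illustration of ours (not the paper's implementation) of the two threading regimes:

    def a(restrictor, scope):
        """Indefinite: the clause's out-list is the scope's out-list, so the
        new markers percolate up and remain available to later discourse."""
        return lambda in_list: scope(restrictor(in_list))

    def every(restrictor, scope):
        """Universal: restrictor and scope are threaded internally, but the
        clause's out-list is just its in-list; the markers are trapped."""
        def det(in_list):
            scope(restrictor(in_list))  # thread internally, then discard
            return in_list
        return det

    girl = lambda l: l + ["g"]          # the nominal adds a reference marker
    kissed_a_boy = lambda l: l + ["b"]

    print(a(girl, kissed_a_boy)([]))      # ['g', 'b']: both escape the clause
    print(every(girl, kissed_a_boy)([]))  # []: nothing escapes the universal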
Let us summarize our claims so far. We have suggested that there is a contrast between the bottom-up information flow of compositional semantics, on the one hand, and the top-down flow that is naturally associated with anaphoric information.
We have also suggested that top-down flow is largely determined, according to the principles of DRT, by the lexical properties of determiners and their structural position in the sentence.
One possible implementation of this analysis would be to factor out anaphoric, contextual information from the rest of semantics, and to use two distinct mechanisms for building the two kinds of representation. However, such an approach fails to explain why the spaces in a DR and the list of contextually-given antecedents always covary; that is, when a new DR subspace is opened, a new context list begins, and when a DR subspace is closed, a context list is simply "dropped", ie. it does not serve as the in-list to any other expression. Indeed, the fact that a DRS in Kamp's theory consists of a universe, corresponding to our context list, and a set of conditions, corresponding roughly to compositional semantic information, suggests that it ought to be possible to enrich the notion of a context from being just a list of antecedents to being a whole DR structure.
In our analysis, then, we thread a list through the syntactic structure which contains both conventional semantic information and information about available antecedents, so that an expression mapping an incoming context into an outgoing context does more than increment the set of possible antecedents: it also adds conditions to the context that correspond to its truth-conditional semantics.
It is necessary that the context be structured, rather than a simple list, as it was in the naive model, and as discussed above. This is because we need to be able to incorporate the semantic structures associated with all expressions, even those that are anaphorically opaque to following anaphora. In the model described immediately above, we accounted for the anaphoric opacity of an expression by "dropping" its context list after it had been processed, but such "dropping" in a system where the context lists also contain "compositional" semantic information would result in that semantic information also being lost.
Rather, we structure the context list as an ordered list of the currently open DR spaces, starting at the most embedded space and working upward through the superordinate spaces. For example, the context list for an item located in DR space K 1 in (11) would be [K 1, K 2, K 3, K 4], where each K i is a set of reference markers and conditions, the current contents of the corresponding space. The first space on the context list is the most embedded space, ie. the current space, and identifies the place where new conditions and reference markers are to be added. Since the context list consists of the active space plus all of the spaces superordinate to it, any reference markers contained in these spaces are possible antecedents for anaphora in the active space.
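These context operations are easy to state once the context is a list of open spaces. A minimal sketch of ours, assuming each space is a pair of a marker list and a condition list:

    def open_space(ctx):
        """Push a new, empty space; it becomes the active (first) space."""
        return [([], [])] + ctx

    def close_space(ctx):
        """Pop the active space, returning it together with the rest."""
        return ctx[0], ctx[1:]

    def add(ctx, marker=None, condition=None):
        """Enter a marker and/or a condition into the active space."""
        markers, conds = ctx[0]
        markers = markers + [marker] if marker is not None else markers
        conds = conds + [condition] if condition is not None else conds
        return [(markers, conds)] + ctx[1:]

    def antecedents(ctx):
        """All reference markers in the active space and its superordinates."""
        return [m for (markers, _conds) in ctx for m in markers]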
The Grammar
We turn now to considering the induction of DRSs. In this section we describe a simplified version of the grammar that we have implemented. The grammar presented here is the actual input to the proof procedure: the parser is nothing more than a declarative statement of the well-formedness conditions of an utterance, plus a proof procedure capable of determining whether or not these conditions actually hold of a given utterance.
The rules are written in DCG format (Clocksin and Mellish 1984) in a superset of Prolog that we developed in this project. This language, which we have dubbed PrAtt (for Prolog with Attributes), allows an attribute-value notation as well as the standard position-value notation of Prolog. For example, the expression "N:syn:index" refers to the value of the Index attribute of the syn attribute of the variable N.
We make heavy use of the attribute-value notation to represent feature bundles associated with constituents. Two attributes that are present on every constituent are syn (for "syntax") and sem (for "semantics"). The sem:in and sem:out attributes contain the context in-lists and out-lists respectively, while the syn attribute holds information used to construct the function-argument structure of the clause.
Expressions act on the context list by opening or closing spaces (ie. pushing or popping spaces from the context list), adding reference markers and conditions to the active space, and looking through all of the spaces in the context list for antecedents for anaphora.
Consider, for example, the common noun woman. It inserts a reference marker f and a condition woman(f) into the active space. Using our earlier relational notation, we can express its meaning as follows: 7

(16) [Current|Super] [woman] [[f, woman(f)|Current]|Super]

In our implementation, this would be written as in (17).

7 We use standard Prolog notation here: variables begin with a capital letter, constants with a lower-case letter, "[x,y]" is the list that contains x and y, and "[x|y]" is the list that consists of x CONSed onto y.
The bracketed equations are conditions that must be satisfied in rewriting an N to the lexical item woman. The first equation assigns a reference marker to the lexical item, 8 the second equation analyses the incoming context list into two parts, the current space (Current) and a list of the superordinate spaces (Super), while the third equation requires the active space of the outgoing context list to contain the reference marker and the condition associated with the noun.
A sample entry for a verb is shown in (18). Again, the equations associated with the lexical entry dissect the incoming context into the current space and a list of superordinate spaces, and place the condition associated with the verb into the outgoing context. One interesting property of this rule is that it is responsible for placing a condition into the context that in essence represents the compositional semantics of the entire clause. The syn attributes of constituents are used to connect the NP arguments of the verb with the verb itself; thus the necessary information to build the condition associated with the entire clause is available at the verb. One can view the equations in the phrase structure rules associated with the syn attribute as directing information from the NP arguments inward and downward to the verb. As we shall see later, the phrase structure rules are written in such a way that the value of the clause's sem attribute is equal to its subject's determiner's sem attribute, and the semantics attributes of the restrictor and the scope of a clause are placed in that determiner's sem:res and sem:scope attributes respectively. As noted earlier, an indefinite determiner does not cause the creation of any additional subspaces; rather the restrictor and the scope are simply placed into the current active space. Therefore, the equations associated with the indefinite determiner simply connect the in-list associated with the sentence to the restrictor's in-list, feed the restrictor's out-list to the scope's in-list, and take the out-list from the scope as the out-list for the clause as a whole.
The lexical entry associated with the universal quantifier every is a little more complicated. It must create two new spaces, one for the restrictor, the other for the scope, and then finally close off both spaces and build the structure associated with the clause as a whole.

8 For simplicity here we have assigned reference markers directly to lexical entries; however, more correctly the reference markers should be assigned to lexical tokens, allowing two occurrences of the same lexical entry to refer to different objects in the world.
The first equation in (20) pushes a new, empty space onto the determiner's in-list as the active space, and makes that list the restrictor's in-list. The second equation takes the restrictor's out-list, pushes another new, empty space onto it, and makes the resulting list the scope's in-list. The final equation takes the scope's out-list, removes the two spaces that were added for the restrictor and the scope, and produces a new list in which the original active space has a complex condition added to it representing the whole universally quantified expression. This last list serves as the out-list for the determiner, and hence for the clause as a whole.
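Put together with the space operations sketched earlier, the behaviour described for the two determiners can be illustrated as follows; this is a sketch of the described equations, not the actual PrAtt entries:

    def every_det(restrictor, scope):
        """Open a space for the restrictor and one for the scope, then close
        both and add an '=>' condition to the original active space."""
        def det(ctx_in):
            res_out = restrictor(open_space(ctx_in))    # first equation
            scope_out = scope(open_space(res_out))      # second equation
            scope_space, rest = close_space(scope_out)  # final equation: pop
            res_space, original = close_space(rest)     # both spaces and build
            return add(original,                        # the complex condition
                       condition=("=>", res_space, scope_space))
        return det

    def a_det(restrictor, scope):
        """Indefinite: no new spaces; thread the restrictor into the scope."""
        return lambda ctx_in: scope(restrictor(ctx_in))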
Below are the phrase structure rules responsible for connecting the various attributes of the constituents as described above. It remains only to give the lexical entry associated with pronouns, and our fragment is complete. This is given in (24). The first three equations require that there be some space containing a reference marker of feminine type with which the pronoun's reference marker can unify; 9 the last two equations take account of the fact that an anaphoric pronoun, while not adding any conditions of its own to the context, can appear in subject position, and thus can have a scope expression.
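The search performed by the first three equations of (24) can be pictured as follows; this is our sketch, with a type table mirroring the paper's type(w,feminine) facts:

    TYPE = {"w": "feminine", "m": "masculine"}  # analogue of the type/2 facts

    def feminine_pronoun(ctx):
        """Yield one reading per feminine marker found in any open space;
        the pronoun itself adds no conditions to the context."""
        for markers, _conds in ctx:     # member/2 over the list of spaces
            for m in markers:
                if TYPE.get(m) == "feminine":
                    yield m, ctx        # unify the pronoun's marker with m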
We have now completely described our declarative formulation of DRS theory. This formulation (together with phrase structure rules that analyse a discourse as a series of sentences) suffices to obtain the analyses shown below. 10

[analysis output: a DRS whose universe contains w, with conditions including woman(w)]

9 The definition of member used here is the conventional one used in Prolog (albeit interpreted by the PrAtt interpreter), while the type predicate is a set of clauses of the form type(w,feminine)., etc.
10 Note that because later elements are pushed onto the front of a DR space, the order of the elements in the DR spaces is the reverse of their "normal" presentation. This does not affect their truth conditional semantics, however.
We have also implemented a more complex version of this grammar incorporating a treatment of unbounded dependencies, and obtained analyses like the following. The parser indicates ill-formedness of its input in the standard Prolog fashion, viz. it fails to find a well-formed DRS for the input sentence.
(28) A woman who loves every man kissed him. no
Conclusion
The declarative reformulation of DRS theory proposed here is relatively faithful to Kamp's original formulation, but has the advantage that it inherits a fully specified declarative and procedural semantics from the underlying Prolog system. It emphasises the view that expressions of the language can be viewed as relations between preceding and following contexts, and shows how these relations can be specified in a formally precise way.
This model opens up several important questions. Kamp showed that the treatment of anaphoric dependencies, normally viewed as a left to right phenomenon, can be integrated with the treatment of the "conventional" truth-conditional semantics of clauses: we have shown that both of these can be integrated into an extended unification-based model of grammar. This integration allows one to be precise about the nature of the syntax/semantics/discourse interface(s), and also allows experimentation with respect to the analysis of specific linguistic phenomena. For example, in our larger grammar (not presented here) we capture strong and weak cross-over phenomena by introducing the reference marker associated with a relative clause NP when the corresponding gap is reached. We are thus analysing what is usually thought of as a syntactic phenomenon in terms of the accessibility of reference markers, a discourse property.
From a computational point of view, there is a delicate interaction between the specific rules adopted in the declarative formulation of the theory and the "power" of the inference procedure needed to determine the well-formedness of a particular utterance with respect to them. The top-down left-to-right inference procedure inherited from Prolog suffices for the grammar presented here, but one can easily write grammars in PrAtt for which this inference procedure may fail to terminate. We are investigating other inference procedures, such as Earley Deduction (Pereira and Warren 1983) and Left Corner parsing, to see if they have better termination properties. Essentially, the problem is one of arranging the equations in the grammar to be applied in an order such that the search space is finite: thus research on various coroutining strategies, such as the use of the freeze predicate, is relevant here. | 2014-07-01T00:00:00.000Z | 1986-08-25T00:00:00.000 | {
"year": 1986,
"sha1": "6c01844baa06d087f8f60b64122a2195628e2ded",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=991559&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "6c01844baa06d087f8f60b64122a2195628e2ded",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245216494 | pes2o/s2orc | v3-fos-license | Wastewater recycling and the possibility of its technical use in practice
In real practice, almost half of all water, after its technical or technological use at various degrees of pollution, is drained by sewerage systems to wastewater treatment plants and subsequently discharged into river and stream recipients. The current and especially the future methods of urban and industrial wastewater treatment are at such a high level that the treated water, before its discharge into the recipients, has a higher degree of quality than the flowing surface water in the watercourse. Under these ever-improving conditions, it is appropriate to use well-treated wastewater not only for the needs of agriculture, but also as an alternative supply of water for fire-fighting purposes. The dislocation of wastewater treatment plants (hereinafter WWTP) in territorial cadastres with safe access to the open level of treated water allows its relatively rapid pumping at any time, especially in conditions where there is no other suitable natural or multipurpose source of fire water. The following article suggests, in a basic way, how to use the given options without the risk that the treated wastewater will endanger the health of fire brigades or have a negative impact on the environment of the building in which the fire is extinguished and its surroundings.
Introduction
Water is one of the most important natural resources, ensuring the survival of animal and plant species, including humans. Fresh water consumption and demand have been rising over the years as a result of population growth and the intensification of agriculture and industry. At least until the end of the 21st century, no turnaround can be expected in the current global management of fresh water. Neither this statement nor other scientific forecasts give much cause for optimism in this area. One of the few truly realistic and quickly feasible ways to change this negative trend is the use of well-treated wastewater, which is an integral part of the water consumption of the human population under the current way of dealing with drinking water. Innovative techniques for protecting aquatic ecosystems, such as the reuse of gray water, rainwater harvesting, seawater desalination and groundwater extraction, are central to minimizing water scarcity, especially in the face of climate change and climate variability. Reusing gray water as an alternative and efficient source of water can help reduce the pressure on fresh water. The level of acceptance is low due to limited information and awareness programs. However, the strategy of water reuse and the motivation to use it are increasing. New ways need to be found to support the strategy of reusing recycled water while increasing environmental safety [1,18]. One of the currently limiting factors in the use of wastewater for a number of subsequent purposes is the European, and subsequently applied Czech, water legislation.
Requirements for the quality of raw water in the conditions of the Czech Republic
Indicators of the quality of water taken from surface or groundwater sources for the purpose of treatment into drinking water, and their limit values, are set by Decree of the Ministry of Agriculture No. 428/2001 Coll., which implements Act No. 274/2001 Coll., on water supply and sewerage for public use. According to the decree, raw surface water or groundwater must meet the limit categories A1, A2, A3. Despite this, it will be necessary in the coming years, at least locally, to look for other ways of obtaining water for water supply systems. It is necessary to realize that water supply systems for public use are not only a supplier of drinking water to consumers but, at the same time, a multi-purpose source of fire water for built-up and undeveloped areas of the state. For this reason, a number of institutions, including students of technical universities, are beginning to deal with this issue and with ways of solving it. In the following text, one of the outputs of this research and of the search for new ways to increase the supply of water for fire use will be presented to the professional and general public. In the years 2017 to 2020, a study was carried out on the possible use of treated wastewater by the fire brigade of the Czech Republic. From time immemorial, wastewater has been considered unnecessary waste, unsuitable for further use. Historically, this approach has evolved from the removal of wastewater outside the inhabited area to its concentration in one place, the WWTP, where this wastewater is safely treated and disposed of by discharge into the recipient [19]. The very demanding permits for the discharge of treated wastewater into recipients always stipulate that the quality of water discharged into the recipient must be at a higher level than the water in the recipient itself. As the contaminated water passes through the WWTP technology, its qualitative criteria are monitored at the outflow and must meet the criteria for discharged water set in the permit issued by the relevant water authority. Physical (temperature, density), chemical (occurrence of individual elements) and biological (number of organisms, animals) indicators determine the quality of water for a given purpose of use and allow the comparison of waters. Water quality is thus a set of properties (organoleptic, biological, chemical, etc.) of water, which can be expressed in layman's terms as water purity. For the purposes of this study, tests were performed in an accredited laboratory, the Laboratory of the Olomouc Health Institute [2].
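As an illustration of how the decree's limit categories work in practice, classifying a raw-water sample reduces to comparing its indicators against per-category limit tables. The Python sketch below uses placeholder indicator names and limit values only; the real values must be taken from Decree No. 428/2001 Coll.:

    # Placeholder limits (mg/L) per indicator for categories A1 < A2 < A3;
    # the actual values are defined in Decree No. 428/2001 Coll.
    LIMITS = {
        "A1": {"COD": 10.0, "BOD": 3.0, "NH4": 0.5},
        "A2": {"COD": 20.0, "BOD": 5.0, "NH4": 1.5},
        "A3": {"COD": 30.0, "BOD": 7.0, "NH4": 4.0},
    }

    def classify_raw_water(sample):
        """Return the best (lowest) category whose limits the sample meets,
        or None if it fails even category A3."""
        for category in ("A1", "A2", "A3"):
            limits = LIMITS[category]
            if all(sample.get(k, 0.0) <= v for k, v in limits.items()):
                return category
        return None

    print(classify_raw_water({"COD": 12.0, "BOD": 2.5, "NH4": 0.4}))  # -> 'A2'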
The course of the research study
In the first phase of the research study, physical and chemical indicators were monitored. Sampling was performed from fire trucks as well as from the Bečva River, and a sample of wastewater was taken from the WWTP. The samples taken were tested for the qualitative indicators COD, BOD and insoluble substances. This analysis was further extended to ammoniacal nitrogen. The water samples were stored for two months in an environment simulating that of a fire brigade garage. The samples were stored in glass bottles or plastic sample boxes at a temperature of 18.5 °C without access to sunlight. They were then re-analyzed for COD, BOD, NL and N-NH4. The results of the analysis are shown in Table 1. Because the sampling took place in winter, when the outdoor temperature was around freezing point, the sample from the river showed a qualitatively better result. However, these parameters do not have a significant effect on the decision as to whether or not treated wastewater can be used. The next step, with a more pronounced effect on the intervening firefighter, was to verify microbiological safety; in general, bacteria or viruses causing serious diseases such as typhus, infectious hepatitis or diarrheal diseases of viral origin can be transmitted by water. An analysis for these diseases would be technically, temporally and financially unbearable. Worldwide, the method of so-called indicators of faecal pollution is applied, in which bacteria living in the intestinal tract of humans and warm-blooded animals, such as E. coli, coliform bacteria or enterococci, are sought. If any of these bacteria are found in the water, the water is suspected of contact with feces or animal remains, so there is a presumption that it could contain pathogenic bacteria and viruses, which most often come from the intestinal tract. In addition to indicators of faecal contamination, so-called general contamination indicators (the number of colonies growing at 22 °C or 36 °C, formerly the so-called psychrophilic and mesophilic bacteria) are also used, which are of less hygienic importance than the previous ones [3]. From the point of view of the protection of the intervening firefighter, it is necessary to look at this issue also from the angle of a possible health threat when using this water. Treated water can contain viruses, parasites or bacteria capable of attacking the human body and causing subsequent diseases. The firefighter's view in this case can be two-sided, namely the practical side and the legislative side. Fire brigades routinely use water from natural sources, and none of the firefighters or fire commanders think about the quality or safety of this water. At the same time, the fact that water is transported under pressure to the place of combustion also plays an important role here. The effective reach of a jet ranges from 9 meters to about 29 meters, depending on the type of nozzle used [4]. Also, the exposure time is relatively short. When using this water without respiratory protection (e.g. liquidation of hidden foci in a forest fire), combustion products swirled up by the water jet are a much bigger threat than droplets or mist of this water [2]. We can also look at this issue on the basis of legislative regulations. Those that come closest to the use or handling of this water during emergency activities were selected, e.g. Act 238/2011 (bathing in the wild), NV 428/2001 (requirements for raw water quality) and ČSN 75 7143 (water for irrigation) [5,6,7].
Subsequently, this procedure was consulted with the Olomouc Health Institute, and the requirements of the standards were extended by further items. This follow-up study was similar to the previous one. Again, a water sample was taken from the WWTP, this time in the amount of 1000 liters, and stored in the conditions of the fire equipment exit garage with access to light prevented. An up-to-date analysis of the abstracted water was performed in the drinking water laboratory in Přerov, together with an extended analysis at the health institute in Olomouc. The drinking water laboratory performed the analysis for coliform bacteria, E. coli, thermotolerant coliform bacteria, intestinal enterococci and Clostridium perfringens (Figure 1) [16]. The Laboratory of the Olomouc Health Institute, in addition to the norms and the law, monitored the occurrence of Pseudomonas aeruginosa, Legionella spp. and Salmonella (Figure 2). The analyses were performed from April onwards at two-week intervals, and after one month at three-month intervals. In August, water was taken and subsequently analyzed from natural sources, specifically from the Bečva River and the Laguna lake. August was chosen because of the least favorable conditions for the quality of natural waters (long-lasting high temperatures). To ensure the full objectivity of the measurements, samples from two fire tanks, one of the open type and one of the closed type, were added in November. The individual samples were then compared with each other; see Table 2.
Figure 2. Salmonella sp. inoculated on Rambach agar [9].
Figure 1. Coliform bacteria and Enterococci
The measurements show that treated water from the WWTP is of significantly better quality than water from natural sources. This applies both to freshly pumped and to long-term stored water from the WWTP. Storage offers a further possibility for using this water: it is worth considering reservoirs of this water at fire stations to continuously replenish the capacity of tanker trucks.
Health risks of water-borne diseases
When assessing the health risk associated with workers' exposure to biological agents, the nature, extent and duration of exposure must be determined so that all risks to workers' health can be assessed and the necessary measures taken to protect their health. In the Czech Republic, a system of work categorization has been introduced in the hygiene service, which also covers work in waste management. The system divides work into four categories according to risk [13]. Biological factors are classified according to the degree of risk of infection into four groups (Section 22 of Government Decree No. 361/2007 Coll. [12], as amended, which lays down the conditions for protecting the health of employees at work). The obligations and responsibilities of the employer to ensure safety and health at work are enshrined in the Labor Code. The main goal of ensuring safety and health at work is to analyze, evaluate and reduce risks to the lives and health of employees at work. The provisions of the Labor Code on safety and health at work are supplemented by implementing and other related regulations [14].
With the use of the specified protective equipment and compliance with the principles of hygiene, handling the wastewater should not cause health complications for those involved, apart from possible reactions in sensitive individuals. Contact with the water itself is generally minimized and exposure is short.
Where protection is reduced, e.g., when extinguishing forest stands, a respirator such as an FFP2 can be used for respiratory protection. Even if the intervening firefighter does not use respiratory protection, the risk of infection by inhaling the aerosol of sprayed water is very low, considering the spray distance and the health resistance of the firefighter (setting aside any current indisposition). Protection is also offered by the use of products such as Chloramine, Savo or Persteril; however, the use of these products can have a negative impact on the environment. The results of the tested samples of treated wastewater also correspond to the values given in the standard ČSN 75 7221, which concerns the classification of surface water quality [16], and in ČSN 75 7143 on water for irrigation [7].
Conclusion
The research studies aim to refute the negative view and suggest the use of treated wastewater for large-scale emergencies, especially at industrial or agricultural facilities, where drawing fire water from the drinking-water hydrant network could affect the supply of water to the population in times of crisis. Chemical and biological analyses have dispelled possible unfounded concerns about the use of this water. The WWTP is an ideal place for establishing pumping stations for fire brigades and also an optimal base for distributing fire water to the burning site with secured operating staff and technical equipment. From the point of view of fire technology, there are no obstacles to pumping this water. The use of purified water from the WWTP is clearly recommended. Especially for regions that regularly suffer from water shortages due to long-term drought, WWTPs are an alternative source of fire water (e.g., for forest fire fighting). The aim is to draw attention to the hidden potential of this alternative and underused water source in the form of the WWTP. The tests performed were at the regional level at one WWTP; for deeper implementation of this resource, further testing at the national level is necessary. Equally important is the legislative enshrinement of the use of purified water from the WWTP. Committing to one of the environmental declarations, under either the ISO 14001 or the EMAS framework, would be a wholly positive step on the part of the management of the Fire and Rescue Service of the Czech Republic. | 2021-12-16T20:07:09.938Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "84b626f0f48f27ba3beeec491e760382ae8a6461",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/900/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "84b626f0f48f27ba3beeec491e760382ae8a6461",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
3769580 | pes2o/s2orc | v3-fos-license | Chinese Grammatical Error Diagnosis Using Ensemble Learning
Introduction
Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers for a long time, mostly due to the flexible and irregular ways in which the language is expressed. Different from English, which follows grammatical rules strictly (e.g., subject-verb agreement and strict tenses and modals), the Chinese language has no verb tenses or number marking and readily tolerates the omission of grammatical elements in a sentence (e.g., a zero subject, verb or object). Some examples are shown below in Table 1.
他們很高興。(They are very happy.)
Table 1. Some typical examples of special grammatical usage in Chinese.
In the above table, the first sentence contains no verb elements in the Chinese version. In the Chinese language, adjectives often do not co-occur with copulas, so if we add a be (是) into the sentence (四月/是/最/熱), it becomes grammatically incorrect. In the second sentence, the conjunction 就 contributes nothing to the meaning of the whole sentence, but it is a necessary grammatical component when collocated with the word 一 to express the meaning of "as soon as". The adverb 很 is an essential element of the third sentence and corresponds to the word very in the English version; however, we can simply remove very but cannot remove 很, due to some implicit grammatical rules. Overall, the expression of the Chinese language is flexible, and the grammar of Chinese is complicated and sometimes hard to summarize, making it very difficult for foreign language learners to learn Chinese as a second language.
The CFL14 and 15 shared tasks provide a platform for learners and researchers to observe various cases of grammatical errors and think deeper about the nature of these errors. The goal of the shared task is to develop computer-assisted tools to help detect four types of grammatical errors in written Chinese. The error types include Missing, Redundant, Disorder and Selection. In last year's shared task, several groups submitted reports employing different supervised learning methods, and some obtained good results in detection and classification. Similar to last year's task, we put our emphasis mostly on the error detection level and the error type identification level, but did little for the position level, although this year's task includes evaluation on that level.
In this paper, we use supervised learning methods to solve the error detection and identification subtasks. Different from most previous work, we didn't use any external language materials except for the dataset from the 2014 shared task. Our approach involves feature extraction, data construction and ensemble learning. We also report some of our observations of the errors and summarize some conceivable rules, which might be useful for future developers. At last, we analyze the limitations of our work and propose several directions for improvement.
The rest of this paper is organized as follows: Section 2 briefly introduces the literature in this community. Section 3 presents some observations on the data provided. Section 4 introduces the feature extraction and learning methods we used for the shared task. Section 5 includes experiments and result analysis. Future work and the conclusion come last.
Related Work
In the community of grammatical error correction, more work has focused on English, such as the research from the CoNLL 2013 and 2014 shared tasks (Ng et al., 2013; Ng et al., 2014). A number of English language materials and annotated corpora are available, so research on this language has gone deeper. However, resources for Chinese are far from sufficient, and very few previous works are related to Chinese grammatical error correction; typical ones are the CFL 2014 shared task and the task held this year. In the following, we briefly introduce some previous work related to Chinese grammatical error diagnosis.
Wu et al. proposed two types of language models to detect the error types of word order, omission and redundancy, corresponding to three of the types in the shared task. Chang et al. (2012) proposed a probabilistic first-order inductive learning algorithm for error classification and outperformed some basic classifiers. Another work introduced a sentence-level judgment system which integrated several predefined rules and N-gram based statistical features. Cheng et al. (2014) presented several methods, including CRF and SVM, together with frequency learning from a large N-gram corpus, to detect and correct word ordering errors.
In last year's shared task, there were also some novel ideas and results for error diagnosis. One submission included manually constructed rules and automatically generated rules, the latter being similar to frequent patterns mined from the training corpus. Zhao et al. (2014) employed a parallel corpus from the web, namely the language exchange website Lang-8, and used this corpus to train a statistical machine translator. Zampieri and Tan (2014) used a journalistic corpus as the reference corpus and took advantage of the frequent N-grams to detect errors in the data provided by the shared task. NTOU's submission for the shared task was a traditional supervised one, which extracted word N-grams and POS N-grams as features and trained an SVM (Lin et al., 2014). In their work, they also employed a reference corpus as the source of N-gram frequencies.
Our submission was similar to NTOU's work, except that we didn't use any large-scale textual corpus as a reference. Our target was to see to what extent a supervised learner can learn only from the limited resource, and which types of classifiers perform better on this task.
Data Analysis
We show some of our observations on the training data in this section. We observed some frequent cases among the error types Missing and Redundant.
For the error type Missing, we noticed that errors often occur in certain cases. For example, the auxiliary word 的 (of/'s) accounts for 11.35% of all the Missing sentences (and 7.93% of the sentences containing 的 in the training data are incorrect). One of the most frequent missing cases is a missing 的 between a (often superlative) adjective and a noun. For instance, 最好(的)電影院 (the best cinema), 附近(的)飯店 (a nearby restaurant), and 我(的)日常生活 (my daily life). From the English translations we see that there is no 's or of in such phrases, unlike the girl's dress (女孩的衣服) or a friend of mine (我的一個朋友), but in Chinese grammar a 的 is still inserted to complete the expression.
For the error type Redundant, the word 了 (an auxiliary word related to the perfect tense) accounts for 10.88% of all the Redundant sentences (and 21.78% of the sentences containing 了 are incorrect).
The word is redundant when the sentence contains nothing related to a perfect tense. For instance, 我第一次去(了)英國留學。(I studied abroad in Britain for the first time.) and 當時他不老(了)。(He wasn't old at that time.). So we can judge whether the word is redundant according to the tense of the sentence.
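Proportions of this kind are straightforward to compute; the following Python sketch is only illustrative, and the sentence lists and variable names are assumptions rather than part of the shared-task distribution.

# Count how often a token (e.g., 的 or 了) appears in the sentences
# annotated with a given error type, as in the statistics above.
def share_with_token(sentences, token):
    hits = sum(1 for s in sentences if token in s)
    return hits / len(sentences)

# missing_sents = [...]  # hypothetical list of Missing-type sentences
# print(f"{share_with_token(missing_sents, '的'):.2%} of Missing sentences contain 的")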
Words that are grammatically incorrect are almost always function words, which behave differently in the grammars of Chinese and English (or other languages). Typical examples are 是 (is), 都 (auxiliary), 有 (be), 會 (will), 在 (in/at), 要 (will), etc. However, we didn't do much with specific words in our research, but only noted that there should be some frequent rules we can follow. We will further discuss some proposals later.
Supervised Learning
In this work, we used no external corpora except the dataset from the 2014 shared task, nor did we include any language-specific heuristic rules or frequent patterns. We wanted to see what kinds of features and what types of supervised learners can benefit this problem most. As declared previously, we did little for position-level extraction, so we focus on feature extraction, model selection and the construction of the training data.
Feature Extraction
For this task, we tried several kinds of features, such as words, POS (part-of-speech) tags, syntactic parse trees and dependency trees. In the end, we found that POS Tri-gram features perform stably and generate the best results. Therefore, we first define the POS Tri-gram features for sentential classification.
For each word in a sentence, we extract the following triple as the Tri-gram for this word: <POS-1, POS, POS+1>. For the beginning and the ending of a sentence, we add two indicator tags to complete the column vectors. For example, in the sentence 這/一天/很/有意思。(This day is very interesting.), the sentence-level POS features are (r, m, zg, l), and the features for the word 這 (This) are <start, r, m>.
In addition, we extract the relative frequency (probability) for each triple based on the CLP 14 and 15 dataset as P(<POS-1, POS, POS+1>). In the experiment, we noticed that the frequency features are also good indicators to detect candidates for grammatical errors.
To summarize, we extract two types of POS Tri-gram features: the binary Tri-gram and the probabilistic Tri-gram. For the binary Tri-gram, if the sentence contains a given Tri-gram (e.g., <start, r, m>), the corresponding position in the gram vector (the union set of all possible Tri-grams after removing those with very low frequencies) is set to 1. For the probabilistic Tri-gram, the position is set to the relative frequency (the proportion) of the Tri-gram.
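To make the two representations concrete, the following Python sketch builds both vectors; it is our own minimal illustration, not the authors' Weka pipeline, and the frequency threshold and helper names are assumptions.

from collections import Counter

START, END = "start", "end"

def pos_trigrams(pos_tags):
    # Return the <POS-1, POS, POS+1> triple for every word position,
    # padding the sentence with start/end indicators.
    padded = [START] + list(pos_tags) + [END]
    return [tuple(padded[i - 1:i + 2]) for i in range(1, len(padded) - 1)]

# Example from the paper: 這/一天/很/有意思 with POS tags (r, m, zg, l).
print(pos_trigrams(["r", "m", "zg", "l"]))
# [('start', 'r', 'm'), ('r', 'm', 'zg'), ('m', 'zg', 'l'), ('zg', 'l', 'end')]

def featurize(corpus_tag_seqs, min_count=2):
    # Build the Tri-gram vocabulary (dropping very low-frequency
    # Tri-grams) and return a function mapping a tag sequence to its
    # binary and probabilistic feature vectors.
    counts = Counter(t for tags in corpus_tag_seqs for t in pos_trigrams(tags))
    total = sum(counts.values())
    vocab = sorted(t for t, c in counts.items() if c >= min_count)
    index = {t: i for i, t in enumerate(vocab)}

    def vectors(tags):
        binary = [0.0] * len(vocab)
        prob = [0.0] * len(vocab)
        for tri in pos_trigrams(tags):
            if tri in index:
                binary[index[tri]] = 1.0                # binary Tri-gram
                prob[index[tri]] = counts[tri] / total  # probabilistic Tri-gram
        return binary, prob

    return vectors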
Supervised Learning
After feature extraction, we feed the features into several supervised learners. We use a series of single classifiers, namely Naïve Bayes (NB), Decision Tree (DT), Support Vector Machines (SVM) and Maximum Entropy (ME), and the ensemble learners AdaBoost (AB), Random Forest (RF) and Random Feature Subspace (RFS). RF is an ensemble of several DTs, each of which samples training instances with replacement and samples features without replacement. RFS is an ensemble classifier based on feature sampling, which combines classifiers trained on different feature subspaces by majority voting. The classifiers are from Weka (Hall et al., 2009).
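For readers without Weka, a rough scikit-learn analogue of these seven learners might look as follows; the mapping (LogisticRegression standing in for Maximum Entropy, and a feature-subsampling BaggingClassifier standing in for RFS) is our assumption, not the authors' exact configuration.

from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression  # ~ Maximum Entropy
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              BaggingClassifier)

classifiers = {
    "NB": BernoulliNB(),
    "DT": DecisionTreeClassifier(),
    "SVM": LinearSVC(),
    "ME": LogisticRegression(max_iter=1000),
    "AB": AdaBoostClassifier(n_estimators=100),
    "RF": RandomForestClassifier(n_estimators=100),
    # Random Feature Subspace: each base tree sees all instances but
    # only a random half of the features; predictions are majority-voted.
    "RFS": BaggingClassifier(DecisionTreeClassifier(),
                             n_estimators=100,
                             max_features=0.5,
                             bootstrap=False),
}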
We take the training sentences with annotated errors as positive instances and subsample the correct sentences as negative ones. By tuning the proportion of negative instances, we discovered that the number of negative instances also affects the final results.
Experiment and Analysis
In the experiment, we use the training data from this year's and last year's shared tasks. Since the scale of this year's data is really small, we add last year's corpus into the training data and perform cross validation in the training steps. Table 2 lists the number of sentences for each error type in the two years' datasets.
Our experiments cover training data construction, feature selection and supervised learning.
Table 2. Error type distribution for the two years' shared tasks.
We tried several groups of training data, different combinations of features and a variety of classifiers in the training phase.
Training Data Construction
As mentioned previously, the sentences that contain no grammatical errors serve as the negative instances for training. To avoid imbalance between the positive and negative instances, negative ones were randomly selected to construct the training set. Finally, we divided the training data into 8 parts and used 8-fold cross validation (CV) for the classifiers. We found that the system achieved the best results when we selected 4000 negative instances.
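A minimal sketch of this construction, assuming feature matrices X_pos and X_neg for the error-annotated and correct sentences (placeholder names, not from the paper), could look like this:

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

def build_training_set(X_pos, X_neg, n_neg=4000):
    # Keep all positives; randomly subsample n_neg correct sentences
    # as negatives to limit class imbalance.
    idx = rng.choice(len(X_neg), size=n_neg, replace=False)
    X = np.vstack([X_pos, X_neg[idx]])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(n_neg)])
    return X, y

# X, y = build_training_set(X_pos, X_neg)
# scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=8))
# print(scores.mean())  # 8-fold CV, as in the paper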
Feature Selection
As mentioned in §4.1, we investigate two feature sets: the POS Tri-gram alone, and the POS Tri-gram plus the POS Tri-gram probability. We report the CV results generated by the four single classifiers and the three ensemble classifiers in Table 3 and Table 4 for the two feature sets, respectively. The results have been optimized by tuning the parameter settings of each classifier.
From the results, we find that the ensemble classifiers generally perform better than the single ones, and AB achieves the best results for detection and identification.
Final Results
Among the three runs of results we submitted, the first run is the best. We show its results in Table 5 and compare them with the CV results. This submission was generated by the ensemble classifier RFS using the POS Tri-gram and probability features. We see that the performance at the identification level falls greatly behind that in the cross validation. One possible reason for this gap is the distribution of instances, which may be quite distinct between the training and the testing data. Another possible reason is the soundness of the probability features.
Analysis
Comparing the results generated by the two feature sets (Tables 3 and 4), it can be seen that the second feature set outperforms the first on both the detection level and the identification level. To some extent, this indicates that the patterns of grammatical phrases may occur frequently in the datasets.
Figure 1. Accuracy of the four error types and the correct type on the four classifiers that perform best.
Further, we pick the last four classifiers, which perform relatively better on the task data, including DT and the three ensemble classifiers, and perform a statistical analysis of the true positive rates during cross validation (Figure 1). The results reveal that the difficulty of judging decreases from Redundant and Missing to Disorder and Selection. In addition, the accuracy for the correct label is not very high, leading to a number of false negative sentences.
Through observation, we found several cases that might affect the prediction results. A typical case is that a grammatically wrong sentence can be corrected in several ways, corresponding to more than one error type. For example, the sentence 他馬上準備上學 (He is preparing for school.) can be classified as any of the four types, e.g., 他馬上要準備上學(了) for Missing.
Table 6. Example of multiple ways of correcting one sentence.
All four correction directions are reasonable, but the dataset only provides the third one. Therefore, such data may create confusion for classification and should be considered in future work. In addition, some annotations may be unclear; for instance, in the sentence 但是這幾天我發現(到)你有一些生活上不好的習慣 (But these days I noticed some bad habits in your daily life), the given annotation is Selection, but we think Redundant is much more reasonable.
Future Work
Based on our observations of the training data, the most direct proposal is to learn at the position level, just as the shared task demands. At this level, we can extract more pointed features, integrating both syntactic and semantic ones. Besides, for sentence-level classification, deep neural network based methods (e.g., Convolutional Neural Networks) are expected, with traditional features or embeddings, to detect more structured rules. In addition, we deem that dependency tree features may be useful and should be further developed. Improvement may also be achieved by mining the confusion in annotation (e.g., the difference between Selection and Redundant).
Conclusion
In this paper, we introduce the ensemble learning based method we used in the CFL shared task for Chinese grammatical error diagnosis. We report some of our observations of the training data, and describe the features and learners used in our experiments. Different from most previous work, we did not use any external language corpus for reference, nor did we use any rules. The results show that the ensemble methods perform better than the single classifiers on our simple features. From the results, we see room for further development. | 2015-08-11T20:29:18.000Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "48d44601258aa75494a674ac5393d27ef859f5d6",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W15-4415.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "48d44601258aa75494a674ac5393d27ef859f5d6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221749327 | pes2o/s2orc | v3-fos-license | Active Shooter Drills: A Closer Look at Next Steps
Editorial
The novel coronavirus has rightfully triggered a public health emergency; however, it has also diverted the attention of the nation away from the enduring crisis of gun violence, especially gun violence in our nation's schools. Because firearms are the second leading cause of death for adolescents [1] and gun access is strongly correlated with unintentional and intentional injury [2], we must continue to focus on the health of our students and schools. Schools serve as a place for social development [3], and acts of violence often occur at school [4], adversely influencing healthy development. For example, in school shootings alone in the United States from 1996 through 2019, approximately 196 students and 30 school staff have been killed and another 243 wounded [5]. These data suggest the damage inflicted by school shootings extends far beyond students.
Although the likelihood of a school shooting in any given school district remains statistically low, in the absence of strong legislation, schools have taken action by implementing active shooter drills. For instance, 95% of American public schools had active shooter drills in place during the 2015-2016 school year [6]. These drills are often referred to as "lockdown drills" and involve confining students to specific areas with additional instructions in the event of an incident.
However, as noted by Moore-Petinak et al., in this edition of the Journal of Adolescent Health [7], little research has attempted to include the perceptions of students impacted by these active shooter drills. Moore-Petinak et al. [7] conducted a qualitative study analyzing adolescents' perceptions of gun violence and active shooter drills. These researchers concluded that very few youth reported receiving evidence-based active shooter drill training and that the drills they did receive caused emotional distress where about 60% reported feeling unsafe, scared, helpless, or sad as a result of experiencing active shooter drills. Moreover, although about 58% of youth reported that active shooter drills teach them what to do if such a situation presents itself, they were uncertain of their ultimate benefit.
Their research also identified fidelity challenges in implementing evidence-based active shooter drill strategies. If schools are implementing only parts of evidence-based active strategies, this alone may contribute to part of the emotional distress that Moore-Petinak et al.'s study [7] sample of students is reporting. This is particularly unsettling with the finding that students reported such drills may actually inform potential shooters of the actions schools may take in a given situation. This perception is not unfounded as The Violence Project has determined from a review of mass shooters from 1966 to 2019 that (1) nearly all were current or former students and (2) these students displayed warning signs before the incident [8]. Combined, these findings present multiple opportunities for school personnel and those concerned with the well-being of youth to develop more comprehensive approaches to complement active shooter drills.
At the individual level, we know from the general strain theory [9] that females are more likely to respond to strains such as peer rejection, being bullied, or being a victim of gossip with depression or self-blame, whereas males are more likely to respond with anger and to blame others, which may lead to externalizing behaviors such as violent crime [10]. This may explain why 96% (50 out of 52) of active school shooters in the United States from 2000 to 2017 were male [6], suggesting male gender is a strong, independent risk factor for harming others. Therefore, monitoring patterns of behavior and responding quickly to students experiencing strain may be useful in identifying vulnerable students early and connecting them with appropriate services before tragic violence occurs.
Moore-Petinak et al.'s [7] study also suggests additional research is necessary to better understand the critical components of effective shooter drills and how to successfully implement them on the one hand, and how best to prepare students, on the other. Thus, beyond understanding individual-level risk factors designed to identify vulnerable students, systems-level approaches are also necessary. For example, implementation research could identify facilitators and barriers to successful implementation of evidence-based strategies, and school leaders could focus on developing a strong school climate to foster trusting and positive relationships and connectedness among faculty, staff, and students.
At the interpersonal and organizational levels, school leaders set the tone and the expectations for behavior, which then affect faculty and staff and subsequently students. Therefore, executing active shooter drills successfully starts at the top and far in advance of the drill itself, by fostering safe and supportive learning environments. To this end, research [11] has demonstrated that higher perceptions of academic support were strongly associated not only with higher reported grade point averages but also with lower reported incidents of fighting and being bullied and with increased feelings of safety at school. Social-emotional learning programs are associated with reductions in violence perpetration because they help adolescents develop a shared set of skills that help them thrive in their learning and social environments by more effectively handling life challenges [12]. Furthermore, noteworthy among successful school-based bullying interventions [13,14] are modifications of the school climate, including but not limited to increased teacher involvement and supervision and clear order and disciplinary policies, to improve the nature of relationships among teachers, students, and their school.
For schools and districts that continue to conduct active shooter drills, Everytown for Gun Safety Support Fund, the American Federation of Teachers, and the National Education Association offer additional evidence-based guidance [15]. These groups advocate for avoiding drills simulating actual incidents; notifying parents, students, and teachers in advance of planned drills; creating age- and developmentally appropriate content in conjunction with teachers and school-based mental health specialists; combining drills with support systems to address student well-being; and evaluating drill effectiveness. Some of these recommendations are also consistent with those made by Moore-Petinak et al. [7].
Finally, at the community and public policy levels, grass roots efforts such as the March for Our Lives gun-safety movement must continue their demands for greater action from lawmakers to pass legislation addressing gun violence and limiting gun access. These capable advocacy groups must continue to point out successful examples from other industrialized nations where gun violence has been significantly reduced. For example, Japan experienced just six gun deaths compared to 33,599 in the United States in 2014 [16]. Japan has accomplished this milestone through a variety of policies, including mandatory educational training, written and shooting examinations, mental health and drug screenings, and criminal records searches. Even when citizens are granted a gun license, it expires after three years, handguns are not permitted, new cartridges can only be purchased by returning used cartridges, gun retailer densities are limited, and police reserve the right to inspect, search, and seize weapons. While some of these policies may not be possible in the United States owing to individualistic cultural norms and freedoms, recent research [17] suggests child access protection laws (making it illegal to store guns or ammunition in a way accessible to a child) display the strongest evidence of subsequent reductions in firearm deaths when compared to right-to-carry and stand-your-ground laws. Hence, there are clearly steps our nation could take to further protect our most vulnerable national treasure: our children. | 2020-09-17T13:06:36.593Z | 2020-09-17T00:00:00.000 | {
"year": 2020,
"sha1": "b70c88816253270f85e9ac21ba8d5b9f879b6e59",
"oa_license": null,
"oa_url": "http://www.jahonline.org/article/S1054139X20304262/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b70c88816253270f85e9ac21ba8d5b9f879b6e59",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
158732765 | pes2o/s2orc | v3-fos-license | The International Treaty on Global Warming: Is it Good or Bad for the Economy?
Global warming is one of the hottest topics all over the world. International authorities have worked together to negotiate the Paris Agreement on global warming. This Agreement has its supporters and critics. The key question is whether, on balance, the Paris Agreement is good or bad for the United States economy. This paper begins with some background information leading up to the passage of the treaty. Next, I outline what is in the treaty. I then critically analyze the arguments in support of and against the Agreement. Finally, I explain the basis for my opinion that in the long run the treaty will benefit the United States economy.
Introduction
Scientific studies show that human beings cause global warming. Our voracity and endless craving eventually lead to a more vulnerable planet. Fortunately, we have just awoken from this ignorance and are searching for ways to better protect our homeland. Consequently, "Green GDP", "Green Development" and other "green" initiatives have become popular. Because we share the earth's environment, global environmental problems cannot be solved by one country. Instead, all the countries responsible for polluting the environment must act together to reduce overall pollution. Although signing an international treaty on global warming is a logical first step to combat global warming, the treaty could have some potentially adverse consequences for the United States economy.
As a developed country, the U.S. relies more on high-tech industries. The issue therefore arises of whether an international treaty on global warming is good or bad for the United States economy.
Opinions are divided on this issue. Some people believe that the treaty will, overall, be good for American business and ultimately lead to a strong economy. Others argue that the treaty will hurt the United States economy because reducing greenhouse gas emissions will increase the cost of production, thereby making American products more expensive and placing them at a competitive disadvantage in the global marketplace. Critics also argue that countries that do not sign this treaty would share the benefits of environmental protection without paying for them, thus raising moral issues and damaging relationships among nations.
What is the treaty?
The United Nations Framework Convention on Climate Change (UNFCCC) is an international treaty that establishes a framework under which specific international agreements (the Kyoto Protocol and the Paris Agreement) may be negotiated to set binding limits on greenhouse gas emissions.
Kyoto Protocol
Under the Kyoto Protocol, signatory nations agreed to binding emission reduction targets by controlling emissions of the main anthropogenic (i.e., human-emitted) greenhouse gases (GHGs) in ways that reflect underlying national differences in GHG emissions, wealth, and capacity to make the reductions (Grubb, 2004). Recognizing that developed countries are principally responsible for the current high levels of GHG emissions in the atmosphere as a result of more than 150 years of industrial activity, the Protocol places a heavier burden on developed nations under the principle of "common but differentiated responsibilities" (UNFCCC, 1). The Kyoto Protocol has three main mechanisms: International Emissions Trading, the Clean Development Mechanism (CDM), and Joint Implementation (JI).
The Paris Agreement
The Paris Agreement sets out a global action plan to put the world on track to avoid dangerous climate change.
The Paris Agreement requires the signing nations to rapidly reduce their emissions of greenhouse gases, using the best available science, to keep the increase in the average global temperature to well below 2°C above pre-industrial levels (European Commission, 1). The INDCs require each individual country to make its own contribution toward achieving the worldwide goal. Each nation must report its contributions every five years, and these are registered by the UNFCCC Secretariat. Countries can cooperate and pool their nationally determined contributions.
3.1 The Arguments in favor of the Treaty
Advocates of the treaty argue that it will improve the United States economy because it requires participating nations to come up with ways to address global warming. The treaty uses a two-handed approach: on the one hand, it requires the participants to stop activities which would aggravate global warming; on the other hand, countries must also develop technology to better face global warming.
Pro-treaty advocates argue that limiting gas emissions will create a green economy. The rules in the treaty restrict maximum greenhouse gas emissions in every participating country, so each country has to limit its economic activities to meet this standard. This means more and more factories aimed at manufacturing will be closed and more coal-powered industries will be forbidden. The trend will be for a manufacturing-driven economy to change into a service-driven economy. On the other hand, the decrease in fossil fuel usage will certainly increase the potential of clean energy, including wind, water, tidal and bioenergy. These practices will not harm the nation's development; instead, they will promote the economy's development in a green manner.
According to pro-treaty advocates, this emission standard will propel new technology. A high-gas-emission industry can still exist as long as it complies with the rule. So if a heavy industry wants to survive, it must innovate: it can improve the performance of its devices to increase production efficiency, or come up with new, highly efficient and environmentally friendly production modes. Both of these approaches will require new technology.
3.2 The Arguments against the Treaty
Just as there are many people who support the Paris treaty, there are many others who oppose it. Anti-treaty advocates believe that there is no global warming problem and therefore argue that the Paris treaty is unnecessary. Some of the more radical scientists who share this belief are referred to by the derogatory term "deniers." Other individual scientists, institutions, and organizations believe there was a break or "hiatus" in global warming.
Anti-treaty advocates point to the most reliable temperature data, from orbiting weather satellites, which show that there has been no warming for nearly two decades. Despite the constant barrage of hyperventilating headlines about a melting planet and the unceasing clamor of climate catastrophists and computer modelers, they argue, global temperatures have not been rising as predicted, except in the always-wrong computer models.
Opponents who take the position that there is no such thing as "global warming" claim that signing a global treaty on global warming is pointless. Still other opponents express their disagreement by emphasizing the damage brought by such an international treaty.
Former President George W. Bush rejected the treaty for two reasons. First, he believed that the costs of reducing greenhouse gases would impose a disproportionate burden on the American economy in pursuit of a still-uncertain benefit. Second, he disliked that the treaty did not bind poor or developing countries to curb emissions (Lane, 2006). Bush thought the treaty was a zero-sum game in which China would be the winner and the U.S. the loser.
The U.S. government's own research has confirmed that domestic programs to reduce greenhouse gas emissions would wreak havoc on the economy, sending jobs overseas to countries such as Mexico and China. The Argonne National Laboratory in the Department of Energy studied the economic effects of proposed greenhouse gas emissions cuts on six domestic industries (wood and allied products, steel, petroleum refining, aluminum, chemical manufacturing, and cement production) and found that they would be devastating. Numerous studies have shown that meeting any new treaty commitments would result in a dramatic decline in U.S. gross domestic product (GDP) (Trisko, 1997).
What is more, in 1992 the U.S. Department of Commerce released a study by DRI, Inc., which found that job losses resulting from the treaty would average between 520,000 and 1.1 million per year, depending on whether the CO2 emission goal was 1990 levels or 10% below 1990 levels. More than 5 million additional jobs would be at risk due to these policies, with Texas, California, Pennsylvania, Ohio, Illinois and Michigan facing the greatest job losses (Trisko, 1997).
My position
After much research and thought, I have concluded that the advantages of the international treaty on global warming outweigh its disadvantages. The purpose of the international treaty should be to solve the environmental problem while ultimately advancing the overall economy. Signing the international treaty would ensure that many countries come together to protect the planet. As Obama said, the "Paris Agreement is the best chance we have to save the one planet we have" (Elizabeth, 2015). Most importantly, it is a good way to contribute to sustainable development, which is always the core issue among nations.
The treaty stimulates environmentally friendly business, benefiting a sustainable economy. With the emergence of sustainable development, "green" companies stand out among their competitors. According to a UCLA-led study, companies that voluntarily adopt international 'green' practices and standards have employees who are 16 percent more productive than the average (Simmons, 2015). Highly productive employees make a company more energetic and competitive, and this kind of company is more attractive to customers, since they also want to make green choices. In the United Kingdom, 54% of consumers buy more environmentally friendly products compared to two years ago (Meglena, 2009). The international treaty is not merely a task among nations; the majority of citizens want to play a part in reducing global warming.
Those who oppose the treaty, claiming that global warming has been on pause for two decades and that the high cost of enforcing the treaty makes it not worth the candle, are short-sighted. Although I admit that we should use the term "climate change" rather than "global warming" to describe the environmental devastation, we have to attach great importance to environmental degradation. It may seem that we have spent our effort on something that does not exactly exist, but the shrinking rainforests and rising sea levels prove that the efforts we make are a necessity. We should put environmental protection in the leading position when doing business, for the good of people today and of future generations. As for the complaint that the treaty places uneven requirements on participating nations, I think it should no longer be a reason to claim the treaty hurts the American economy. Under the Paris Agreement, the INDC mechanism will ensure that every country contributes its part in the context of its own national circumstances.
Conclusion
Global warming is a topic that we cannot ignore. As humans, we must maintain the delicate balance between the environment and the economy. The international treaty provides a tool we can use to achieve this goal. We need to protect the vulnerable environment, but also promote a strong economy.
However, people are divided on the effect the international treaty has on the American economy. Those against the treaty claim that the high cost and unequal contributions brought by the treaty would harm the economy as a whole; those in favor hold that the joint effort would create genuine global cooperation.
I embrace the international treaty because it will benefit the United States economy in the long run. To be honest, I was neutral when I first began to write this paper, believing that the treaty was a double-edged sword: it brought benefits, but also hurt the economy to some extent.
However, after researching the treaty, I changed my mind because my research shows that the treaty will be beneficial. Even if global warming is on pause, as some scientists' research suggests, we should never pursue economic growth at the expense of environmental degradation. Our ambition to cope with climate change will eventually turn into an economic opportunity shared by all the participants who fulfill the responsibilities stipulated in the treaty. In the meantime, I highly recommend that the treaty be modified to enable all nations to participate voluntarily, without a compulsory provision. This would rebut the argument that the practice is a zero-sum game.
On my honor, I have neither received nor given unauthorized assistance in any manner on this paper. | 2019-05-20T13:05:14.606Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "38423f37d866063dc6eb0a3bda3143e9e01e0c11",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/13/e3sconf_icemee2018_01017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e0048b657f51d3b35175f91731045554837d5e3e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
232319389 | pes2o/s2orc | v3-fos-license | The influence of teacher annotations on student learning engagement and video watching behaviors
While course videos are powerful teaching resources in online courses, students often have difficulty sustaining their attention while watching videos and comprehending the content. This study adopted teacher annotations on videos as an instructional support to engage students in watching course videos. Forty-two students in an undergraduate course at a university in Taiwan were randomly divided into a control group that watched a course video without teacher annotations, and an experimental group that watched a course video with teacher annotations. The collected data included a learning engagement survey, students’ video watching behaviors, and student interviews. The results showed that there were differences in student learning engagement between the control and experimental groups. The teacher annotations increased students’ behavioral and cognitive engagement in watching the video but did not increase their emotional engagement. In addition, this study identified how students learned when watching the course video with the teacher annotations through highlights of the video content, literal questions, reflective questions and inferential questions. The results concluded that teacher annotations and student learning engagement were positively correlated. The students acknowledged that their retention and comprehension of the video content increased with the support of the teacher annotations.
Previous studies of online learning have examined students' learning behaviors, learning strategies (Tsai, Lin, Hong, & Tai, 2018), and learning experiences (Shen, Cho, Tsai, & Marra, 2013). However, little research has paid attention to the course videos which students watch to acquire knowledge in online courses. One research topic that remains underexplored is how to improve students' learning engagement when watching course videos.
Course videos are powerful primary teaching materials that teachers use to present learning content in online courses in subjects such as language, science, engineering and physics (Aldera & Mohsen, 2013; Dufour, Cuggia, Soula, Spector, & Kohler, 2007). The popularity of course videos in online courses is attributed to their multimedia presentation of learning content, using moving images, audio, and text to help students gain a deep understanding of abstract content, which can be difficult to verbalize but easy to demonstrate (Lange & Costley, 2020). The visual and auditory features of course videos also enable students to better retain learning content in online courses (Jonassen, Peck, & Wilson, 1999; Schnotz & Rasch, 2005). Another useful feature of course videos for student learning is control features such as "play," "forward," and "stop," which allow students to consume learning content at their own pace. Based on these features, video-based content has been shown to be better than reading-based content in improving students' learning outcomes (Love, Hodge, Grandgenett, & Swift, 2014).
Although course videos are powerful teaching resources in online courses, problems with course videos have been discussed in the literature. For example, some students cannot sustain their attention when the video content is long and difficult to understand (Hughes, Costley, & Lange, 2019). The absence of teacher support is another problem with course videos. Without teacher support, students, especially beginners not familiar with video topics, often encounter difficulty in comprehending the video content and sustaining their attention (Homer, Plass, & Blake, 2008). Thus, teacher support should be provided to assist students to comprehend video content and improve their learning engagement (Ronchetti, 2010;Zhang, Zhou, Briggs, & Nunamaker Jr, 2006). This study adopted teacher annotations on videos as a method of instructional support to address the following research questions: What is the difference in learning engagement between the students who watched a course video with teacher annotations and those who watched a course video without teacher annotations? What is the relationship between the teacher annotations and the students' learning engagement? What are the students' perceptions of watching a course video with teacher annotations?
Learning engagement and technologies
Learning engagement refers to the time and effort which students invest in learning activities (Heflin, Shewmaker, & Nguyen, 2017). It typically involves three components: behavioral engagement, cognitive engagement, and emotional engagement (Appleton, Christenson, & Furlong, 2008). Behavioral engagement is the students' participation in learning activities, such as playing, stopping, and rewinding the course video (Fredricks, Blumenfeld, Friedel, & Paris, 2005; Goggins & Xing, 2016). Cognitive engagement is the cognitive effort students make to acquire and comprehend complex concepts, and it involves the use of higher-order thinking skills such as analysis, reasoning, and critiquing (Finn & Zimmer, 2012; Ding, Kim, & Orey, 2017). Emotional engagement is regarded as students' psychological perceptions of learning activities (Jung & Lee, 2018). These three components of learning engagement have been used as criteria to predict students' learning performance and success in learning environments (Phan, McNeil, & Robin, 2016).
Past researchers (e.g., Cheng & Chiu, 2016; Topu & Goktas, 2019) have explored technologies to foster student learning engagement. These technologies included gamification tools (Ding, et al., 2017), Information and Communication Technologies (ICTs) (Chen, Lambert, & Guidry, 2010), and 3D virtual environments (Topu & Goktas, 2019). Göksün and Gürsoy (2019), for example, utilized gamification tools such as Kahoot to promote learning engagement and academic performance. They divided a total of 97 participants into the Kahoot experimental group (N = 30), the Quizizz experimental group (N = 33), and the control group (N = 34). The three groups underwent the six-week-long instructional activities and took an academic achievement test and a student engagement survey before and after the instructional activities. Their results showed that the Kahoot experimental group had higher scores on the student engagement survey and on the academic performance test than the control group. In the same vein, Rashid and Asghar (2016) also found that the students who used ICT technologies such as social media, internet search engines, and video games in their learning scored higher in the traditional student engagement measures. In addition, 3D virtual environments such as Second Life have been developed to allow students to deal with realistic tasks and interact with 3D avatars to effectively engage them in learning activities (Topu & Goktas, 2019). These research findings demonstrate the potential of using technologies to improve student learning engagement.
Learning engagement in video watching
Although the value of using technologies to support learning engagement has been well established, few studies have investigated how to promote student learning engagement in watching online course videos. Past studies mainly focused on the development of annotation tools to help students comprehend video content. These annotation tools included Microsoft Research Annotation System (MRAS), Media Annotation Tool (MAT), VideoANT, Open Video Annotation, and Annotating Academic Video. MRAS was the earliest video annotation tool, designed by Bargeron, Gupta, Grudin, and Sanocki (1999) to enable students to take notes on a section while watching a video. Bargeron et al. (1999) compared the video annotations with handwritten note-taking to investigate students' preferences for the two types of annotations. The results showed that the students preferred video annotations over traditional note-taking because taking notes on the video made it easier to organize and contextualize the notes within the video content. Later, video annotation tools evolved from individual annotations to collaborative annotations, which encourage group discussion on video content. MAT, for example, was developed by RMIT University to allow students to read each other's video annotations and provide feedback. In these annotation tools, students were able to annotate a video section and read and reply to peers' video annotations. But asking students to annotate videos did not necessarily enhance their learning engagement in watching online course videos (Piolat, Olive & Kellogg, 2005; Risko, Foulsham, Dawson, & Kingstone, 2013).
According to the cognitive theory of multimedia learning (CTML) (Mayer, 2001), students' difficulty in annotating videos is due to the limited cognitive capacity of working memory. Students can only process a small portion of the visual and auditory information in a video at one time in their working memory. Students may stop and replay the video content several times so as to reflect on the content and identify the gap between the video content and their existing knowledge structure. Annotating videos can thus become an intensive and overwhelming cognitive activity for students. Piolat et al. (2005) also found that students felt cognitively overwhelmed while annotating and watching videos at the same time. Annotating videos became a distraction for the students, especially when the students were not familiar with the video content. Therefore, asking students to annotate videos significantly increased their cognitive load. In addition, students' annotations on videos are not thought-provoking enough to promote deeper reflection on video content. Risko et al. (2013) indicated that students preferred teachers to annotate video content so they could read the annotations to better comprehend the video. These studies suggested that teacher annotations on videos are necessary to engage students in watching course videos.
The impact of teacher annotations on student learning engagement in watching videos remains underexplored because past research focused on students' comprehension of video content and student annotations (e.g., Mu, 2010; Li, Kidziński, Jermann, & Dillenbourg, 2015). It is still unknown how teacher annotations can support student engagement in watching course videos. In addition, previous studies relied on self-reported data, such as interviews. The use of self-reported data has been criticized because such data might provide a biased result rather than an accurate depiction of the actual learning behaviors (Gonyea, 2005). Video analytic techniques are encouraged to supplement self-reported data to assure that the interpretations of the results are valid and consistent. To fill these research gaps, this study aimed to investigate how teacher annotations engage students in watching course videos through video analytics, which can capture a more objective and nuanced picture of students' behaviors while watching videos embedded with teacher annotations.
The rationale of teacher annotations on videos
Teacher annotations on videos are defined as the behavior of adding notes on a specific segment of a video (Cross, Bayyapunedi, Ravindran, Cutrell, & Thies, 2014). Teacher annotations on videos are regarded as a teaching strategy to align with students' cognitive structures to overcome the limited capacity of students' working memory by attracting attention and indexing and organizing information (Barak, Herscoviz, Kaberman, & Dori, 2009; Jones, Blackey, Fitzgibbon, & Chew, 2010). These features of annotations support the cognitive process of watching course videos by helping students (a) focus their attention on an important segment of video content, (b) interpret or summarize video content, and (c) share personal reflections on the video content (Bargeron et al., 1999; Ibrahim, Callaway, & Bell, 2014).
The demonstration-based training (DBT) model could be used as a pedagogical design to integrate video annotations into course videos through four interrelated processes: attention, retention, production, and motivation (Grossman, Salas, Pavlas, & Rosen, 2013). Attention refers to the active process of filtering and selecting video content to be transferred into working memory (Anderson, 2010). To sustain students' attention to the video content, teachers can highlight key points by adding an arrow or a circle (Richter, Scheiter, & Eitel, 2015). The second process of the DBT model, retention, is the process of organizing and integrating video information into the existing knowledge structure in the long-term memory (Bandura, 1986). One strategy for supporting students' retention of video content via video annotation is the use of tagging. Tagging is used to annotate the key points of a short video section with a word or a short phrase. The tags on the video section can help students summarize and grasp the structure and concepts of the learning content.
The last two processes of the DBT model are production and motivation. Production entails students using what they have learned from the video content to accomplish a learning task, and motivation refers to students' engagement in the process of watching course videos (Bandura, 1986). To facilitate production and motivation, teachers can use video annotations to build an interactive learning environment by generating questions on a video section for students to answer. The teachers' questions on the video section, furthermore, can prompt students to re-examine, clarify, defend, or elaborate on their thoughts about the video content, thus leading to deeper comprehension and engagement (Kessler & Bikowski, 2010; Storch, 2005).
Participants
Forty-two students were recruited from an undergraduate course at a university in Taiwan. The age of the students ranged from 20 to 22. Among the 42 students, 25 were female and 17 were male. They had taken online courses in which they were required to watch online course videos to acquire knowledge. However, this was the first time for them to watch a course video with the support of teacher annotations. The 42 students were randomly divided into a control group (N = 20) that watched a video without teacher annotations and an experimental group (N = 22) that watched a course video with teacher annotations.
Research design
The research was conducted over 9 weeks. In week 1, the researcher randomly divided the students into control and experimental groups and introduced the research project. When introducing the research project, this study adopted a blinding strategy to reduce the Hawthorne effect. The researcher did not announce that the students would be allocated to either the control or the experimental group, and stated that all students would watch a course video with teacher annotations, only in separate periods of time: before or after week 9. Before week 9, the experimental group watched the video with teacher annotations, while the control group watched the video without teacher annotations. After week 9, the researcher did not collect data from the students but allowed the control group to watch the video with teacher annotations and the experimental group to watch the video without teacher annotations. By doing so, the students received an equal opportunity to watch the video with teacher annotations and did not know whether they had been allocated to the control or the experimental group, which reduced the Hawthorne effect. From weeks 2 to 4, the researcher selected and annotated a course video from YouTube. The video was around 15 min long and presented a story from Chinese literature which the students had never seen before. VideoANT, a web-based annotation tool, was used to allow the researcher to make timeline-based textual comments in synchronization with the video and share the annotations with the students. VideoANT was selected for several reasons. First, VideoANT supports teacher-student interactions by allowing teachers to (a) tag a video segment on which they wish to make a comment, and (b) share their annotations for their students to read and reply to (see Fig. 1), while most video annotation tools, such as TurboNote, ReClipped, and Cincopa, do not allow students to provide feedback on teachers' annotations. Second, VideoANT is an open educational resource offered to everyone, unlike commercial annotation tools which require users to pay to access their full set of features. Third, as VideoANT has been widely applied by previous researchers in educational settings, the use of VideoANT in this study could contribute to the literature by providing pedagogical suggestions for further use of VideoANT in educational settings.
During weeks 5 and 6, the researcher assigned the annotated course video for the students to watch. Both the control and the experimental groups watched the same video. The students in the experimental group watched the video with the teacher annotations, while the control group watched the video without seeing the annotations. The researcher, as the teacher, used the DBT model as an instructional framework to annotate the course video in three ways: by tagging, questioning, and signaling (see Fig. 2). Based on the DBT model, the teacher annotations were categorized into (a) highlights of the video content with a phrase or a keyword, (b) literal questions about the video content, (c) reflective questions about the video content, and (d) inferential questions about the video content. In the course video, the teacher annotations consisted of 3 highlights of video content, 2 literal questions, 2 reflective questions, and 2 inferential questions. Students in the experimental group were told they could decide whether to view the teacher annotations and respond to the annotations. After watching the video, both groups immediately took a learning engagement survey and were also asked to write a short reflection on their learning experience while watching the video.

Fig. 1 The VideoANT interface
Data collection and analysis
The collected data included (a) a learning engagement survey, (b) students' video watching behaviors, and (c) student interviews. A learning engagement survey was distributed to the control and experimental groups after they watched the video to examine the differences in learning engagement between the two groups (RQ1). The learning engagement survey was adapted from the School Engagement Measure (SEM; Fredricks et al., 2005), which consists of 21 Likert-scale items (5 = agree; 4 = basically agree; 3 = hard to say; 2 = not quite agree; 1 = disagree). Each of the 21 items was categorized as behavioral engagement, emotional engagement, or cognitive engagement. Examples of the question items in the survey for the three categories above are "I watched the video several times towards understanding the course video" (behavioral engagement), "I am interested in participating in watching the course video" (emotional engagement), and "I always try to understand the course video even if it is not easy" (cognitive engagement). Fredricks et al. (2005) reported Cronbach's alpha values of 0.72 for behavioral engagement, 0.83 for emotional engagement, and 0.84 for cognitive engagement. The learning engagement surveys of the control and experimental groups were analyzed using a multivariate analysis of variance (MANOVA) to compare the learning engagement between the two groups in watching the course video.
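As an illustration of this analysis step, the following is a minimal sketch of a one-way MANOVA on the three subscale scores using Python's statsmodels; the file name and column names are hypothetical, and the original analysis may have been run in a different statistical package.

```python
# Hedged sketch of the MANOVA step (hypothetical file and column names).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# One row per student; 'group' codes control (0) vs. experimental (1);
# the other columns hold mean scores of the three SEM subscales.
df = pd.read_csv("engagement_survey.csv")

manova = MANOVA.from_formula(
    "behavioral + emotional + cognitive ~ group", data=df
)
print(manova.mv_test())  # reports Wilks' lambda, F, and p for the group effect
```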
The students' video watching behaviors were recorded via a screen recorder app. Students were interviewed to explain their watching behaviors, including clicking on the teacher annotations and playing, rewinding, and forwarding the video. Examples of the interview questions included "Why did you pause, resume, or forward the video at this moment?", "Why did you click on the teacher annotations?", and "Why did you click on the peers' answers to the teacher annotations?". The students' video watching behaviors were analyzed together with the learning engagement survey using the Pearson correlation to investigate the relationship between the teacher annotations and student learning engagement (RQ2). Interview data were then collected to further explain how the teacher annotations were related to the students' learning engagement in watching the video. The extreme-case sampling strategy was used to select students from the experimental group for the interview, as Patton (2002) indicated that "more can be learned from intensively studying extreme or unusual cases than can be learned from statistical depictions of what the average case is like" (p. 170). Thus, the three students who had the highest scores on the learning engagement survey were selected to be interviewed by the researcher at the end of the experiment. The three students were asked to explain their video watching behaviors during the interview. The students' interview data were analyzed using Braun and Clarke's (2006) thematic analysis. Three themes related to students' learning paths emerged from the data: reflective questions, literal questions, and teacher's highlights of video content.
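A minimal sketch of the correlation step is given below, continuing the hypothetical dataframe from the previous sketch; the click-count and subscale column names are hypothetical stand-ins for the behavior counts extracted from the screen recordings.

```python
# Hedged sketch: correlating annotation clicks with engagement scores.
from scipy.stats import pearsonr

# 'literal_clicks' = count of clicks on literal-question annotations,
# 'cognitive' = mean cognitive-engagement score (both hypothetical columns).
r, p = pearsonr(df["literal_clicks"], df["cognitive"])
print(f"r = {r:.3f}, p = {p:.3f}")
```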
The student interview was conducted with the students in the experimental group after they watched the course video with annotations. During the interview, the students responded to such questions as "Why did you click pause or resume the video at this moment?", "Why did you click on the teacher annotations?", "What did you like or dislike about the teacher annotations?", and "How did you benefit from the teacher annotations when watching the video?" to explore students' perceptions of watching the course video with teacher annotations (RQ3). The students' responses to the questions were analyzed using an inductive analysis approach (Attride-Stirling, 2001; Seidman, 2006) featuring a sequence of activities including (1) organizing and reading through data, (2) coding data, (3) generating themes, (4) interrelating themes, and (5) interpreting the themes.
Differences in learning engagement between the students who watched the course video with teacher annotations and those who watched the video without annotations
The students who watched the course video with teacher annotations and those who watched the video without annotations all took a learning engagement survey after watching the video. Table 1 shows the means and standard deviations of each learning engagement variable for the two groups. The standard deviations of the three learning engagement variables were below 1, indicating that the scores are clustered closely around the mean. The multivariate result presents a significant difference in the learning engagement of the two groups, Wilks' lambda = 0.49, F = 13.08, p = 0.00 (see Table 2). The effect size (eta squared, η²) was calculated to measure the magnitude of the difference when a significant difference was observed. Cohen (1992) stated that the larger the effect size, the stronger the relationship between variables, and suggested that effect sizes of 0.20, 0.50, and 0.80 denote small, medium, and large effects, respectively. The η² = 0.58 reached a medium effect size, showing that 58% of the learning engagement was associated with the teacher annotation scaffold. The results suggest that the use of teacher annotations on the course video resulted in the differences in student learning engagement in watching the video. ANOVAs were conducted to investigate the statistical difference in each of the learning engagement variables between the two groups. The univariate ANOVA results show significant differences in behavioral engagement, F(1, 40) = 33.31, p = 0.00, and cognitive engagement, F(1, 40) = 15.29, p = 0.00. However, there was no significant difference in emotional engagement, F(1, 40) = 3.32, p = 0.08. The results show a medium effect size in behavioral engagement, η² = 0.45, and a small effect size in cognitive engagement, η² = 0.28. The ANOVA results (see Table 3) suggest that the teacher annotations could increase students' behavioral and cognitive engagement in watching the course video, but not their emotional engagement.
Relationship between the teacher annotations and learning engagement
The teacher annotations consisted of 2 literal questions, 3 highlights of video content, 2 reflective questions, and 2 inferential questions. The average number of student clicks on these four types of teacher annotations is shown in Table 4. "Literal questions" and "Highlights of video content" were the top 2 teacher annotations that the students clicked on when watching the course video. The Pearson correlation results indicate that the literal questions were positively correlated with cognitive engagement, r = 0.484, p = 0.022, and behavioral engagement, r = 0.536, p = 0.010. The highlights of video content were positively correlated with cognitive engagement, r = 0.591, p = 0.004. The reflective questions were positively related to cognitive engagement, r = 0.561, p = 0.007. However, the inferential questions were not related to any type of learning engagement. These results indicate that the teacher's literal questions could increase students' behavioral and cognitive engagement in watching the course video. The highlights of the video content and reflective questions could also promote students' cognitive engagement in watching the course video. Student 1 (S1), student 5 (S5), and student 9 (S9) were selected from the experimental group to illustrate how the teacher's literal questions, highlights of the video content, and reflective questions enhanced student behavioral and cognitive engagement in watching the course video. S1, S5, and S9 were selected using the extreme-case sampling strategy to represent the students who had higher behavioral and cognitive engagement (see Table 5). Emotional engagement was not discussed because no significant improvement in emotional engagement was found among the students.
Teacher annotations: literal questions
The teacher's literal questions were intended to elicit responses directly stated in the video content. Figure 3 shows an example of the learning path prompted by a literal question: What did the little boy take from the tree? After clicking on the literal question, S1 went back to a video segment to review the video content. She also clicked on the teacher's highlights of the video content to look for the answer and reviewed the question before answering. It was observed that the learning path of the literal question was an interactive process in which S1 rewound the video, clicked on the teacher's highlights, and reviewed the literal question several times until she could find the right answer to the question. As Fig. 3 shows, S1 clicked on the literal question 4 times, rewound the video 6 times, and clicked on the teacher's highlights 6 times. When asked to describe her learning path following the literal question, S1 stated, "It is not easy to find the correct answer to the teacher's questions [the literal questions]. So I watched the video back and forth and read the question many times to get the right answers." She further elaborated, "But I liked the experience. The questions pushed me to spend more time watching the video and I was able to better remember the video content." These results indicate that the literal questions which the teacher annotated on the video promoted the student's cognitive and behavioral engagement by stimulating the student to review and remember the video content.

Table 4 The average number of student clicks on the teacher annotations (N = 22)
Literal questions: 3.50
Highlights of video content: 2.05
Reflective questions: 1.32
Inferential questions: 0.64

Fig. 3 S1's learning path following the literal question
Teacher annotations: reflective questions
The reflective questions asked the students to relate the video content to their life experiences or share personal feelings about the video. The learning path prompted by the reflective question was different from that of the literal questions. As Fig. 4 shows, S5 stopped the video 2 times and read peers' responses 4 times after clicking on the reflective question "Has anyone done the same thing to you in your life?".
Stopping the video and viewing peers' responses were the two unique behaviors prompted by the reflective question. When asked about his learning path following the reflective question, S5 explained that he stopped the video because "the questions [reflective questions] were about personal stuff. Even though I could understand the video content, I still need time to find the connection between the video content and my life." The statement shows that the reflective questions led the students to go beyond their current understanding of the video content to build the connection between the video content and their life experiences. Therefore, the reflective question enabled the students to increase their cognitive engagement in video watching. S5 also stated that reading peers' responses to the reflective questions was interesting because "everyone's answers were different. So, I could learn something from reading peers' responses." The statement demonstrates that the reflective questions were positively related to student cognitive engagement because they motivated the students to discuss and analyze the video with their classmates, which helped them comprehend the video from different perspectives.
Teacher annotations: highlights of video content
Highlighting video content was a means of signaling important points in the video's message. S9's learning path prompted by the teacher's highlights involved first clicking on the teacher's highlights, then continuing the video, and finally going back to review the highlights. Figure 5 shows that S9 reviewed the teacher's highlights 5 times. Such behaviors represented S9's cognitive engagement in comprehending the video content. S9 explained that "because the teacher's highlights were just like hints for the important message of the video content, I think it is important to read them carefully." In other words, the teacher's highlights functioned to direct the students' attention to the important video content and encourage the students to devote more cognitive effort to understanding the main video messages by reviewing.
Students' perceptions of watching the course video with the teacher annotations
This was the first time for the students to watch a course video with the assistance of teacher annotations. Nineteen out of 22 students enjoyed the learning experience, as S2's comment reflects: "The activity was fun. In the past, the teacher simply asked us to watch the video, which was a little boring." These 19 students identified the benefits of the teacher annotations. For example, four students made the following statements in their reflections: "I can remember the important message of the video content with the teacher's highlights" (S2). "I can better understand the video content" (S3). "The teacher questions helped me recall the video content" (S8). "I replayed the video several times in order to answer the teacher questions" (S17). As these statements show, the first benefit of the teacher annotations for the students was to help them comprehend the video at the surface level. For example, the teacher's highlights made it easy for the students to locate the main video message so that they could remember the factual information in the video. The teacher questions were also useful in helping them retain the video content since the students reviewed the video in order to answer the questions correctly.
Deep comprehension of the video content was achieved by the students with the assistance of the teacher annotations and their peers' responses. For instance, six students made the following statements in their reflections: "I liked the teacher questions, because the teacher questions pushed me to think more about the video content" (S9). "I especially liked the peer responses. After reading those responses, I would have more ideas or questions about the video content" (S15). "Reading my classmates' responses was the fun part because some of their answers were creative and interesting" (S17). "They [the responses] helped me clarify the points I did not understand in the video" (S20). "I spent more time reading peers' responses [to the teacher questions]" (S1). "I like the teacher annotations. But it is more interesting to read peers' responses to the teacher annotations" (S14).
As the excerpts above demonstrate, the teacher questions and the peers' responses opened the eyes of the students to different ways of interpreting the video content. Specifically, their peers' responses supported the students to extend the video content beyond personal understanding, as S15, S17, and S20 stated in the excerpts. The excerpts of S1 and S14 also show that the peer responses helped to sustain students' attention and enhanced their enjoyment of the video activity. The results suggest that teachers should focus more on the use of question-type annotations to increase students' behavioral and emotional engagement in watching the course video.
Discussion and conclusions
This study examined the influence of teacher annotations on student learning engagement using an annotation tool, VideoANT. The results corroborate previous studies (e.g., Colasante & Douglas, 2016; Miller, Zyto, Karger, Yoo, & Mazur, 2016; Mirriahi, Jovanovic, Dawson, Gašević, & Pardo, 2018; Pardo et al., 2015) showing that annotation tools are beneficial for enhancing student learning engagement. The results further showed that the teacher annotations fostered the students' behavioral and cognitive engagement, but not emotional engagement. The results echoed Colasante and Lang's (2012) study showing that students' emotional engagement decreased while they actively engaged in learning with teacher annotations through a media annotation tool. This study attributed the students' low emotional engagement to the fact that the teacher annotations distracted them from watching the video, as several students indicated that they could not enjoy the video because they needed to pay attention to the teacher annotations. However, this study would argue that the students' low emotional engagement did not synergistically interact with their cognitive and behavioral engagement, since the students who saw the annotations demonstrated higher behavioral and cognitive engagement. Teachers can add annotations to a course video to promote students' learning engagement and increase their comprehension of the video content, even though this may decrease students' motivation. Future studies may explore how to foster students' emotional engagement when watching course videos with teacher annotations.
The main contribution of this study is to present the learning paths of the students that were prompted by the teacher annotations, including the teacher's highlights of the video content, literal questions, and reflective questions. These learning paths reveal how the teacher annotations benefited students' learning of the course video based on Bloom's Taxonomy. The teacher's highlights could support students to "remember" the video content, as the students clicked on the teacher's highlights to review the important segment of the video content. The results were in line with the findings of previous studies (Ibrahim, Callaway, & Bell, 2014) stating that annotations can reduce students' cognitive load of memorizing the course video. In addition, the learning paths showed that the teacher's literal questions prompted students to stop, rewind, and replay the video. Such results suggest that the teacher's literal questions motivated the students to clarify the video content to enhance their "comprehension" of the video. The other type of teacher annotation was the reflective questions, which encouraged the students to view their peers' responses. The results showed the potential of using reflective questions to facilitate the students to discuss and "analyze" the video content, as S15 said in the results section: "I especially liked the peer responses. After reading those responses, I would have more ideas or questions about the video content." The results above suggest that the teacher annotations effectively promoted students' cognitive engagement to help students remember and understand the course video content.
This study concludes that the teacher annotations and student learning engagement were positively correlated. Nineteen out of 22 students in this study acknowledged the value of the teacher annotations on the course video, as they perceived that their retention and comprehension of the video content increased with the support of the annotations. In line with previous studies (Chen, Li, & Chen, 2020), several students pointed out that they really liked the fact that the annotation tool allowed them to view peers' responses to the teacher's questions. In addition, this study found that very few students responded to the peers' responses that they read, possibly because the researcher did not give enough instruction regarding how to respond to peer feedback on the annotation. Future studies may explore how video annotation tools and teacher annotations can support collaborative learning while students watch the course video and investigate the interactive patterns among students that lead to successful performance.
There are a few limitations to this study. First, a novelty effect can often occur with the introduction of novel technology (Tsay, Kofinas, Trivedi, & Yang, 2020). The introduction of teacher annotations might lead to a novelty effect on student learning engagement. As a result, this study utilized qualitative data, including students' video watching behaviors and student interviews, to examine and reveal students' motivations to engage with teacher annotations, which were addressed in the second and the third research questions. A longitudinal study is also encouraged for future research to examine the novelty effect on student learning engagement with teacher annotations. Second, this study could only recruit 42 students due to the policy of maintaining small class sizes to improve students' learning. The small sample size may influence the generalizability of the quantitative results; however, this study collected students' video watching behaviors and student interviews to validate the quantitative results and offer insights not seen in the quantitative results. Third, this study did not investigate how the teacher annotations are helpful for beginners. Future studies can consider students' levels to explore the impact of the teacher annotations on student learning engagement and exam performance.
"year": 2021,
"sha1": "4b79d91c950b1a2785c9b61d7297ac03b12ba03e",
"oa_license": "CCBY",
"oa_url": "https://educationaltechnologyjournal.springeropen.com/track/pdf/10.1186/s41239-021-00242-5",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b79d91c950b1a2785c9b61d7297ac03b12ba03e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Essential Oils Biofilm Modulation Activity and Machine Learning Analysis on Pseudomonas aeruginosa Isolates from Cystic Fibrosis Patients
The opportunistic pathogen Pseudomonas aeruginosa is often involved in airway infections of cystic fibrosis (CF) patients. It persists in the hostile CF lung environment, inducing chronic infections due to the production of several virulence factors. In this regard, the ability to form a biofilm plays a pivotal role in CF airway colonization by P. aeruginosa. Mitigating bacterial virulence and hampering bacterial cell adhesion and/or biofilm formation could represent a major target for the development of new therapeutic treatments for infection control. Essential oils (EOs) are being considered as a potential alternative in clinical settings for the prevention, treatment, and control of infections sustained by microbial biofilms. EOs are complex mixtures of different classes of organic compounds, usually used for the treatment of upper respiratory tract infections in traditional medicine. Recently, a wide series of EOs were investigated for their ability to modulate biofilm production by different pathogens comprising S. aureus, S. epidermidis, and P. aeruginosa strains. Machine learning (ML) algorithms were applied to develop classification models in order to suggest a possible antibiofilm action for each chemical component of the studied EOs. In the present study, we assessed the biofilm growth modulation exerted by 61 commercial EOs on a selected number of P. aeruginosa strains isolated from CF patients. Furthermore, ML has been used to shed light on the EO chemical components likely responsible for the positive or negative modulation of bacterial biofilm formation.
Introduction
The opportunistic pathogen Pseudomonas aeruginosa is a significant cause of healthcare-associated infections correlated with high morbidity and mortality in individuals with pneumonia, chronic obstructive pulmonary disease (COPD), or cystic fibrosis (CF) [1-4]. These infections are particularly problematic in intensive care units. For these reasons, this microorganism is included in the critical category of the World Health Organization's (WHO) priority list of pathogens for which the discovery of new therapeutics is urgently needed [5]. P. aeruginosa can cause both acute and chronic infections, since its pathogenic profile originates from a large and variable arsenal of virulence factors and antibiotic resistance determinants. In the airways of CF patients, P. aeruginosa persists, inducing a chronic infection; furthermore, it is widely known that the CF pulmonary environment confers multiple advantages to P. aeruginosa over other pathogens, such as Staphylococcus aureus and Klebsiella pneumoniae [6]. The ability to form a biofilm plays a pivotal role in CF airway colonization by P. aeruginosa. Indeed, among its various virulence factors, the ability to produce highly structured biofilms confers important advantages, including phenotypic resistance to host defenses, antibiotics, and disinfectants [7]. These characteristics prevent bacterial clearance and allow the establishment of highly recalcitrant chronic infections [8,9].
A novel strategy to fight P. aeruginosa infection could derive from the identification of compounds acting on the biofilm phenotype without affecting bacterial viability; these antibiofilm compounds could also enhance the effectiveness of conventional therapies, particularly in chronic infections such as those in CF [10,11].
Herbal antimicrobials are considered as a potential alternative in clinical settings for the prevention, treatment, and control of infections sustained by microbial biofilms [12]. Essential oils (EOs) are complex mixtures of different classes of organic compounds, and they are usually used for the treatment of upper respiratory tract infections in traditional medicine [13]. Furthermore, bacteria fail to develop resistance to multi-component treatments such as EOs due to their multitarget actions [14].
Recently, a wide series of EOs from Mediterranean plants were investigated for their ability to modulate biofilm production by different pathogens comprising S. aureus, S. epidermidis, and P. aeruginosa strains [15,16]. In those studies, machine learning (ML) algorithms were applied to develop classification models in order to suggest a possible antibiofilm action for each chemical component of the studied EOs. An analysis of the ML models indicated the chemical components possibly responsible for the inhibition or stimulation of bacterial biofilms. In two recent publications, ML-based clustering was used to develop a convergent microbiological protocol in which 61 EOs were evaluated on 40 clinical isolates of S. aureus and P. aeruginosa strains from CF patients [16,17]. First, the antimicrobial activity of each EO was tested against each S. aureus and P. aeruginosa clinical strain. Then, the antibiofilm activity was evaluated in the same S. aureus clinical isolates [17]. Based on these results, in the present study, we assessed the biofilm growth modulation exerted by the same EOs on a selected number of P. aeruginosa strains isolated from CF patients. Furthermore, ML has been used to shed light on the EO chemical components likely responsible for the positive or negative modulation of bacterial biofilm formation.
Ethics Approval and Informed Consent
This research, performed according to the principles of the Helsinki Declaration, was approved by the ethics committee of the Children's Hospital and Institute of Research Bambino Gesù (OPBG) in Rome, Italy (no. 1437_OPBG_2017 of July 2017). The individual participants and parents/legal guardians of the patients have signed an informed consent form included in the study.
Description of P. aeruginosa Clinical Isolates from CF Patients
Six representative clinical P. aeruginosa strains were used in this investigation, previously selected by means of unsupervised ML clustering, as recently described [17].
Patients were treated according to the current standards of care [18]. Microbiological cultures were performed according to the approved guidelines as already described in Ragno et al. [17]. In Table S1, the 18 qualitative descriptors used to cluster and define the six selected P. aeruginosa strains are described. Phenotypic and genotypic characteristics of these strains are summarized in Table S2. The moderately virulent P. aeruginosa PAO1 (PAO1) and the highly virulent P. aeruginosa PA14 (PA14) were used as reference strains [19].
Biofilm Production Assay in the Presence of EO
The biofilm production was quantified in vitro by the microtiter plate biofilm assay (MTP). A bacterial suspension (OD600 nm of about 0.5) in the exponential growth phase was diluted into the wells of a sterile 96-well polystyrene flat-base plate prefilled with medium containing or not containing each of the EOs listed in Table S3, as previously reported [20]. Each EO was solubilized by adding DMSO to generate a mother stock solution at 50% v/v concentration. As a control, the bacterial cells were grown in Brain Heart Infusion broth (BHI, Oxoid, Basingstoke, UK) in the first row of the plate. In the second row, the same culture medium was supplemented with each EO at a final concentration of 1.00% v/v. The incubation was performed aerobically overnight at 37 °C. After 18 h of incubation, planktonic cells were gently removed by washing each well three times with double-distilled water, and the wells were patted dry in an inverted position. Each well was stained with 0.1% crystal violet for 15 min at room temperature, rinsed twice with double-distilled water, and thoroughly dried to quantify the biofilm formation. The biofilm was subsequently solubilized with 20% (v/v) glacial acetic acid and 80% (v/v) ethanol. The total biomass of biofilm was spectrophotometrically quantified at 590 nm. Each data point is composed of four independent experiments, each performed in at least three replicates.
Essential Oil Chemical Composition Analysis
The EOs are listed in Table S3. They were purchased from Farmalabor srl (Assago, Italy) and their chemical composition was analyzed by gas chromatography-mass spectrometry (GC-MS). The adopted operative conditions followed Papa et al. [16]. Each component was identified by comparing the obtained mass spectra with those reported in the Nist 02 and Wiley mass spectra libraries. Linear retention indices (LRIs) of each compound were also calculated using a mixture of aliphatic hydrocarbons (C8-C30, Ultrasci Bologna, Bologna, Italy) injected directly into the GC injector. All analyses were repeated twice.
Machine Learning Binary Classification Modeling
All analyses were performed using the Python programming language (version 3.7, https://www.python.org/) [21,22] by executing in-house code in the Jupyter Notebook platform [16,17,20]. The chemical composition of each EO and the microbiological data were imported, subsequently loaded into a Python Pandas dataframe, and pre-processed to the final datasets used to obtain the classification models. The scikit-learn (sklearn) [23] and Pandas [24,25] libraries were used to implement the machine learning (ML) algorithm protocols.
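As a minimal sketch of this loading and binarization step (the file name, the exact column layout, and the choice that class 1 means "strong inhibitor" are assumptions; only the overall shape of 61 EOs by 239 components plus one bioactivity column is given in the text):

```python
import pandas as pd

# 61 rows (EO1-EO61); 239 chemical-component percentage columns plus one
# 'biofilm' column with % biofilm production relative to untreated bacteria.
data = pd.read_csv("eo_dataset_strain25P.csv", index_col=0)  # hypothetical file

X = data.drop(columns="biofilm")
# Binarize at a chosen threshold, e.g. 40% (strong biofilm inhibition):
# label 1 if the EO keeps biofilm production below the threshold.
y = (data["biofilm"] < 40).astype(int)
```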
During model development, an unsupervised dimensionality reduction/transformation was performed with principal component analysis (PCA) [26] to extract 60%, 80%, 90%, and 100% of the explained variance (Table S4). Different cut-off values related to the percentage of biofilm reduction/augmentation were used to develop ad hoc models to inspect strong, moderate, and weak biofilm inhibition and biofilm enhancement. In a departure from previous applications, a data augmentation (DA) approach was also implemented herein [27]. The EO dataset was augmented by means of composition random perturbation, while keeping the same bioactivity for each augmented related EO. In particular, for each EO, all the components were randomly modified by adding or subtracting up to 15% to/from each EO component, increasing the number of data rows by 10 (aug10) or 20 (aug20) times. In the case of unbalanced augmentation, for each EO, 10 new "virtual" records were generated (baug10 and baug20 in the table), while for the balanced process, with w being the weight of the EO class, each EO was augmented w*10 times. Moreover, components with an occurrence lower than 2, 4, or 6 were eliminated from the training set. The robustness of the final models, as well as the hyperparameters' tuning, was evaluated by cross-validation (CV).
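The DA step described above can be sketched as follows, continuing the hypothetical X, y from the previous sketch; the ±15% multiplicative perturbation and the 10× replication follow the text, while details such as the random generator and the absence of renormalization are assumptions.

```python
import numpy as np
import pandas as pd

def augment(X: pd.DataFrame, y: pd.Series, factor: int = 10,
            max_pert: float = 0.15, seed: int = 0):
    """Replicate each EO `factor` times, randomly perturbing every chemical
    component by up to +/- max_pert while keeping the original class label."""
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(factor):
        noise = rng.uniform(1 - max_pert, 1 + max_pert, size=X.shape)
        X_parts.append(X * noise)
        y_parts.append(y)
    return (pd.concat(X_parts, ignore_index=True),
            pd.concat(y_parts, ignore_index=True))

X_aug, y_aug = augment(X, y, factor=10)  # the "aug10" setting
```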
Due to the high number of considered hyperparameter combinations, the ML modeling strategy was conducted as follows (a hedged code sketch of the random-search step is given after the list):
1. A first coarse ML model generation was run with 10 random hyperparameter combination runs from all possible considered combinations (Tables S5 and S6) [28];
2. A second level of investigation was run with 100 random hyperparameter combination runs from all possible considered combinations (Tables S6 and S7) to select the optimal DA settings;
3. A pre-final level was run with 1000 random hyperparameter combinations to check for protocol correctness, while extracting statistical coefficients for preliminary model evaluation;
4. A final hyperparameter combination selection was performed by running 10,000 random combinations;
5. The best model was finally further investigated with 1000 runs of DA perturbations, and the top-scored model was used to deeply analyze the data.
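A minimal sketch of one random-search round is shown below, using sklearn's RandomizedSearchCV with an MCC scorer and a random forest; the parameter grid is purely illustrative and not the one reported in Tables S5-S7.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

mcc_scorer = make_scorer(matthews_corrcoef)
param_dist = {  # illustrative ranges, not the paper's actual settings
    "n_estimators": [50, 100, 200, 500],
    "max_depth": [None, 3, 5, 10],
    "class_weight": [None, "balanced"],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=10,  # coarse first round; later rounds use 100/1000/10,000
    scoring=mcc_scorer,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    random_state=0,
)
search.fit(X_aug, y_aug)
print(search.best_params_, search.best_score_)  # best cross-validated MCC
```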
Linear and non-linear ML classification algorithms were used to develop different models: random forest (rf), logistic regression (lr), support vector (sv), gradient boosting (gb), decision tree (dt), and k nearest neighbors (knn), as implemented in sklearn. The accuracy (ACC), F1 score, and Matthews correlation coefficient (MCC) were used to numerically and graphically evaluate the binary classification models. The importance of each chemical component present in the EOs was independently evaluated through the "feature importance" (FI) and partial dependence (PD) [29] methods, as implemented in the Skater python library [30,31].
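The FI/PD analysis can be approximated with sklearn's inspection tools, as in the sketch below; this is a stand-in for the Skater-based workflow actually used, not a reproduction of it.

```python
import numpy as np
from sklearn.inspection import partial_dependence, permutation_importance

best = search.best_estimator_

# Permutation importance as a proxy for Skater's feature importance (FI).
fi = permutation_importance(best, X_aug, y_aug, n_repeats=10, random_state=0)
for i in np.argsort(fi.importances_mean)[::-1][:10]:
    print(X_aug.columns[i], round(fi.importances_mean[i], 3))

# Partial dependence (PD) of the prediction on one component to read off its
# positive or negative trend (assuming "linalool" is one of the columns).
pd_res = partial_dependence(best, X_aug, features=["linalool"])
```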
Models were validated by leave-some-out CV by means of five groups using the stratified K-fold method monitoring the average value of MCC obtained from 50 random CV iterations [15,32]. The selection of the final models was based on the MCC values.
Biofilm Production Modulation by EOs
The EOs' ability to modulate P. aeruginosa biofilm production was evaluated at a concentration of 1.00% v/v on the basis of a previous report [17]. The antimicrobial activity of the 61 EOs listed in Table S3 was evaluated, and the results are reported in Table S8. EOs without antimicrobial activity were then investigated for their ability to modulate biofilm production. Biofilm production was compared to that of untreated bacteria (Table 1).

Table 1. Effect of EOs on biofilm formation. Percentage of bacterial biofilm formation in the presence of each EO listed in Table S3 at a concentration of 1.00% v/v relative to untreated bacteria. Each data point is composed of four independent experiments, each performed with at least three replicates. NA: not applicable; at the concentration tested, the EO was antimicrobial for this strain, and consequently the biofilm modulation was not evaluated.
Essential Oil Chemical Composition
The chemical compositions of the 61 EOs have already been reported as described in reference [16], and they are also reported in the Supplementary Material (Table S9).
Datasets
Considering the antimicrobial activity data (Table S8), the biofilm production investigations (Table 1), and the eight P. aeruginosa strains, a total of eight different initial datasets were loaded into a Pandas dataframe. Each dataset was composed of a data matrix of 61 rows (EO1-EO61, samples listed in Table S3) and 240 columns (one bioactivity and 239 chemical components). To evaluate the ability of the ML models under development to discriminate between biofilm-inhibiting and biofilm-stimulating EOs, the biological data were binarized (partitioned into two classes) using different percentages of biofilm production as threshold values. For all the strains used, threshold values of 40% (strong biofilm inhibition) and 120% (strong biofilm stimulation) were selected.
For completeness, moderate biofilm inhibition (threshold of 80%) and a direct classification of biofilm inhibitors and enhancers (threshold of 100%) were also taken into consideration, and the results are reported in the Supplementary Material. As the antimicrobial data were too unbalanced, no attempt was made to develop ML models for them.
Classification Models
To avoid too many unbalanced datasets, the modeling was restricted to binarized data showing, at a maximum, a ratio of 10% ÷ 90% (or 90% ÷ 10%) data distribution, thus allowing the development of 27 models out of the 32 possible combinations (eight strains by four thresholds).
Classification modeling at the 40% and 120% thresholds was carried out with six different ML algorithms (rf, gb, sv, lr, dt, and knn) using the introduced datasets. Initial classification models were built using the same protocol reported in reference [16], but, unfortunately, statistically acceptable models (MCC values greater than 0.4) were obtained only for two strain/threshold combinations (Table S10). Similarly, only a few weak models were obtained for the 80% and 100% threshold values (Table S11). Recently, DA has been reported as a useful tool to develop ML models suffering from either an insufficient amount of data or the presence of noisy experimental data [33]. Despite the intrinsic power of ML, the latter conditions can lead to poor models, as the available data do not cover the possible range of applications, such as EOs' chemical composition variability. Therefore, DA was implemented herein in a new strategy to develop ML models (see Materials and Methods). Classification models were built with a number of latent variables corresponding to 60%, 80%, 90%, and 100% of the whole chemical components' variance extracted by PCA. Moreover, to avoid the development of models driven by poorly represented components, those components with occurrences lower than 2, 4, and 6 were systematically removed from the training set. Hyperparameter optimization was carried out with a wide range of settings, leading from thousands to billions of combinations (Tables S5 and S7). Therefore, to speed up the calculations, a random search was used in place of the most common and exhaustive grid search. Random search hyperparameter optimization has been shown to have a 95% probability of finding a combination of parameters within the optimal 5% with only 60 iterations [28]. Herein, the procedure described in the Materials and Methods section led to the elaboration of more than three quarters of a million models (Table S12) to seek the best combination of settings (DA and hyperparameters) to define the eleven final ML models (Table 2). The initial DA and hyperparameter optimization was run with only 10 iterations and with coarse settings (Table S5), leading to the generation of 2880 models for each of the 11 datasets of Table 2. For each dataset, the top 3 models were selected, leading to the 33 preliminary ML models P1-P33 with cross-validated MCC values ranging from 0.34 to 0.78 (Tables S13 and S14). Then, the P1-P33 models were subjected to a further 100 iterations to select 11 models in which the DA settings (Tables S4 and S16) were finally selected, leading to the intermediate models I100_1-I100_11 characterized by MCC values in the 0.47-0.88 range (Table S15). A third round of hyperparameter optimization was performed with 1000 random iterations while keeping the models' I100_1-I100_11 DA settings, furnishing models I1000_1-I1000_11 (Tables S17 and S18), which were optimized to the pre-final ML models (PF1-PF11) through a further 10,000 random iterations. Interestingly, models PF1-PF11 were characterized by the same range of MCC values as models I1000_1-I1000_11 and models I100_1-I100_11, thus indicating that a sort of convergence was reached for the optimal hyperparameter selection (Tables S19 and S20). The models PF1-PF11 were then subjected to 100 rounds of iteration of random DA with the DA settings and hyperparameters selected using the associated models I100_1-I100_11 and PF1-PF11 themselves, respectively.
The top-scoring DA final models F1-F11 were then selected, and the associated MCC, ACC, and F1 values calculated (Table 2). Models F1-F11 were finally analyzed through FI and PD values and plots to investigate the most important chemical components likely responsible for biofilm modulation (FIs) and to assess their statistical contribution in each model. For completeness, the same procedures were applied using threshold values of 80% and 100% (Table S21).
Chemical Components Importance and Partial Dependences
Chemical component importance was evaluated through FIs and PDs. Each FI indicates a sort of absolute correlation coefficient for each of the chemical components (Figures S1-S13), while the associated PD gives its negative, positive, or no influence. The PDs' positive or negative trends were evaluated through the Spearman correlation (SP) coefficient. The SP values were used to correct the corresponding FI into positive or negative weighted FIs (WFIs), which were then plotted. To reduce useless redundant values, only the top 10 and lowest 10 WFI values were inspected (Figures 1 and 2). The analysis of the WFI values led to the association of the overall effect on biofilm inhibition or stimulation for each chemical component (Table 3).

Figure 1. Weighted feature importance (WFI) plot for models F1 to F6 obtained on the dataset binarized at 40% biofilm inhibition. Positive bars are associated with inhibition of biofilm production, whereas negative bars are associated with augmented biofilm production. Only the 10 highest (anti-biofilm) and 10 lowest (pro-biofilm) values are displayed.

Figure 2. Weighted feature importance (WFI) plot for models F7 to F11 obtained on the dataset binarized at 120% biofilm stimulation. Positive bars are associated with inhibition of biofilm production, whereas negative bars are associated with augmented biofilm production. Only the 10 highest (anti-biofilm) and 10 lowest (pro-biofilm) values are displayed.
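A minimal sketch of the WFI computation as described above is given below; signing each FI by the Spearman trend of its PD curve is one plausible reading of the correction, and the exact formula used in the paper may differ.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.inspection import partial_dependence

def weighted_fi(model, X, importances):
    """Sign each feature importance by the Spearman correlation between a
    component's grid values and its PD curve (assumed form of the correction)."""
    wfi = {}
    for j, name in enumerate(X.columns):
        res = partial_dependence(model, X, features=[j])
        # Note: the key is "values" instead of "grid_values" on older sklearn.
        grid, curve = res["grid_values"][0], res["average"][0]
        sp = spearmanr(grid, curve)[0]
        wfi[name] = np.sign(sp) * importances[j]
    return wfi

wfi = weighted_fi(best, X_aug, fi.importances_mean)
```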
Chemical Components Importance and Partial Dependences at 40% Biofilm Production Threshold Value
At a 40% biofilm production threshold value, good MCC, ACC, and F1 values were obtained for six out of the eight P. aeruginosa strains (models F1-F6, Table 2 and Figure 1). In particular, linalool, listed in the top 30 most frequent EO components with a percentage of presence of about 60% (Table S22), proved to be the chemical component most likely to be involved in strong biofilm production inhibition, as identified in four out of six ML models (22P, 25P, 27P, 39P). Other compounds that seem to be important for a strong biofilm reduction are eucalyptol, linalyl anthranilate, geranyl acetate, bornyl acetate, cis-geraniol, sabinene, and cis-3-pinanone. Differently from linalool, these compounds are associated with the inhibition of biofilm production for one, two, or three strains. All together, the nine components might ensure a wide spectrum against the 22P, 25P, 27P, 37P, and 39P isolated strains. Interestingly, linalool and geranyl acetate are two of the most abundant components in EO54 and, in agreement with the above analysis, this EO showed a strong biofilm reduction with an average percentage of biofilm production as low as 31% against the 22P, 25P, 27P, 37P, and 39P isolated strains. Indeed, linalool was present at different percentages in seven of the eight more potent biofilm-reducing EOs (EO10, EO11, EO24, EO44, EO46, EO53, and EO54, each composition reported in Table S9), combined mainly with eucalyptol and geranyl acetate, likely acting in a synergistic way. Interestingly, β-caryophyllene, α-pinene, limonene, and p-cymene were indicated as important in decreasing biofilm production for some strains, while having a negative impact on EOs' biofilm inhibition for other strains (Table 3). In contrast, β-pinene and carvacrol were found to exert only negative modulation on biofilm inhibition.
Chemical Components Importance and Partial Dependences at a 120% Biofilm Production Threshold Value
As seen for the threshold value of 40%, at 120% ML models with acceptable MCC values (F7-F11, Table 2, and Figure 2) were obtained for only five out of eight strains (PAO1, 25P, 26P, 27P, and 39P). Eucalyptol and o-cymene were the components calculated as likely to be responsible for slowing down biofilm production in PAO1, while thymol, p-cymene, citronellal, and carvacrol were mainly found to be compounds possibly important for biofilm production stimulation. The compounds counterbalancing biofilm production enhancement were indicated to be linalool, linalyl anthranilate, limonene, and α-pinene.
Discussion
Biofilm represents the strongest form of phenotypical resistance to the host immune defenses and antibacterial drugs operated by bacteria. It plays a pivotal role in the chronicization of many infections, including lung infections as in CF patients. The identification of new compounds able to interfere with biofilm development could lead to the removal of a primary cause of the persistence of infections.
In previous reports, it has been demonstrated that EOs can exert either antibacterial [15,17,34-42] or biofilm modulation effects [15-17,20,42-48]. As a continuation of a previously reported screen for antibacterial and antibiofilm EOs [15-17,20,42], herein, 61 previously investigated commercial samples have been evaluated for their abilities to modulate the biofilm formation of six P. aeruginosa clinical strains (22P, 25P, 26P, 27P, 37P, and 39P) in comparison with the reference strains PAO1 and PA14. Except for a few samples, the EOs tested at a concentration of 1.00% v/v showed a wide variability in either positively or negatively modulating bacterial biofilm production. A biofilm is continuously in equilibrium between accumulation and disruption, being subjected to a wide array of intracellular and extracellular factors. Therefore, it is not surprising that the same EO, being a complex mixture of many chemical compounds (molecules), may act synergistically or anti-synergistically in stimulating or inhibiting biofilm development. The application of ML algorithms led to models that allowed the identification of the chemical compounds most related to strong biofilm growth inhibition. In particular, linalool (and to a lesser extent eucalyptol, linalyl anthranilate, geranyl acetate, bornyl acetate, cis-geraniol, sabinene, and cis-3-pinanone) is indicated as the most important component endowing EOs with a strong antibiofilm potency. In agreement with previous reports on several chemical constituents of the same EOs [16,17], it could be speculated that eucalyptol and linalool could be listed as common chemical compounds that reduce biofilms in both S. aureus and P. aeruginosa reference strains and clinical isolates. Indeed, Karuppia and coworkers and Kifer and coworkers, in two independent reports, demonstrated that eucalyptol plays an antibiofilm role in S. aureus and P. aeruginosa [49,50], while linalool was independently pointed to by Lahiri and Kerekes as an important regulator of S. aureus and P. aeruginosa biofilm formation [51,52].
Regarding the biofilm enhancement driven by our 61 tested EOs, thymol, p-cymene, citronellal, and carvacrol were indicated by the ML models as those compounds important for biofilm production stimulation. In the face of our experimental evidence, a literature survey on Scopus (www.scopus.com, accessed on 1 March 2022) showed almost no reports on small molecules' or EOs' abilities to increase biofilm production.
In this regard, 89 EOs extracted from Mediterranean plants, previously screened for their biofilm modulation capability in P. aeruginosa PAO1 [15] and in four Staphylococcus strains [20], showed an ability to stimulate biofilm production. The analysis of their composition by means of ML methods did highlight the important role of a few chemical compounds in modulating biofilm production. Nevertheless, the overall chemical compounds of those EOs did not overlap with the ones investigated herein, and therefore different conclusions were drawn. Interestingly, as sheer speculation, in previously published reports limonene was indicated as a potential key molecule that, due to its lipophilic nature, could likely exert some gating role for different anti-biofilm or pro-biofilm compounds. Herein, limonene and other hydrophobic components (α-pinene and p-cymene) seem to be confirmed to serve as enhancers (positively or negatively) of other components.
In spite of reports supporting the above hypothesis on biofilm inhibition [12,49-51], further investigations on ad hoc selected EOs or their isolated chemical compounds are required to confirm the role of single molecules and their synergistic or anti-synergistic effects.
In conclusion, in agreement with previously published articles, this study renders the role of EOs and their chemical components less obscure, and ML algorithms have further confirmed their potential as valuable tools to shed light on EOs' likely mechanisms of activity. Furthermore, herein, the application of DA proved to be a valid method for building robust models where classical ML application failed. In particular, DA application seems especially suitable for EOs, which are notoriously difficult to standardize for the chemistry and medicinal chemistry communities. As applied herein, the DA accounts for the composition variability of EOs obtained from the same plants, as well as the intrinsically unstable component ratios due to the different, high volatilities of the individual compounds.
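A minimal sketch of what such an augmentation step could look like for compositional EO data is given below. The multiplicative-noise scheme, its settings, and the renormalization to 100% are our assumptions, not necessarily the settings used in this study.

```python
import numpy as np

def augment_compositions(X, n_copies=5, rel_noise=0.1, seed=0):
    """Jitter each EO composition vector and renormalize to 100%.

    Assumed scheme: multiplicative noise models batch-to-batch variability
    of components; the actual DA settings in the paper may differ.
    """
    rng = np.random.default_rng(seed)
    augmented = []
    for _ in range(n_copies):
        noisy = X * rng.normal(1.0, rel_noise, size=X.shape)
        noisy = np.clip(noisy, 0, None)
        noisy = 100 * noisy / noisy.sum(axis=1, keepdims=True)
        augmented.append(noisy)
    return np.vstack(augmented)
```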
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/10.3390/microorganisms10050887/s1, Table S1. Qualitative descriptors used for the unsupervised machine learning clusterization of P. aeruginosa strains. Table S2. Phenotypical and genotypical characterization of 6 representative strains of P. aeruginosa. Table S3. Essential oil IDs and associated plant names. Table S4. List of systematic DA settings varied during ML hyperparameter optimization. Table S5. List of hyperparameter settings used for the preliminary ML models through random search optimization. Table S6. List of weights for the class_weight hyperparameters in Table S6. Data are presented as python dictionaries. Table S7. List of hyperparameter settings used for the models' refinement through random search optimization. Table S8. Antimicrobial activity of EOs listed in Table S1, on representative clinical and reference strains of P. aeruginosa. Table S9. Compositions of the 61 essential oils used in the study. Table S10. Preliminary models developed with the procedure described in reference [16]. Table S11. Preliminary models developed for thresholds at 80% and 100% biofilm modulation [16]. Table S12. Number of models evaluated during the ML optimization process. NA means no models were developed for the strain/threshold combination due to the low number of active or inactive samples. Table S13. Preliminary models P1-P33 obtained with the combination of DA and random search hyperparameter optimization. Table S14. Preliminary models P1-P33's associated hyperparameters. Table S15. Intermediate ML models I100_1-I100_11 with the data augmentation and 100 random iterations. Table S16. Intermediate ML models I100_1-I100_11 associated hyperparameters as listed in Table S15. Table S17. Intermediate ML models with the data augmentation setting selected from models I100_1-I100_11 and 1000 random iterations. Table S18. Intermediate models I1000_1-I1000_11 associated hyperparameters as listed in Table S17. Table S19. Final models PF1-PF11 with the data augmentation setting selected from models I1000_1-I1000_11 and 10,000 random iterations to seek the best hyperparameters. Table S20. Model hyperparameters as listed in Table S19. Table S21. Optimized final models obtained with 100 random iterations of data augmentation at threshold values of 80% and 100%. Table S22. Occurrences of the EOs' chemical components. Only the most frequent compounds are listed. Figure S1. Feature importance for model F1 (see main text Table 1). The top 20 components are displayed. Figure S2. Feature importance for model F2 (see main text Table 1). The top 20 components are displayed. Figure S3. Feature importance for model F3 (see main text Table 1). The top 20 components are displayed. Figure S4. Feature importance for model F4 (see main text Table 1). The top 20 components are displayed. Figure S5. Feature importance for model F5 (see main text Table 1). The top 20 components are displayed. Figure S6. Feature importance for model F6 (see main text Table 1). The top 20 components are displayed. Figure S7. Feature importance for model F7 (see main text Table 1). The top 20 components are displayed. Figure S8. Feature importance for model F8 (see main text Table 1). The top 20 components are displayed. Figure S9. Feature importance for model F9 (see main text Table 1). The top 20 components are displayed. Figure S10. Feature importance for model F10 (see main text Table 1). The top 20 components are displayed. Figure S11.
Feature importance for model F11 (see main text Table 1). The top 20 components are displayed. Figure S12. Normalized feature importances for the final models F1-F6 developed at a threshold value of 40% (see main text Table 1). The top 20 components are displayed. Figure S13. Normalized feature importances for the final models F23-F27 developed at a threshold value of 120% (see main text Table 1). The top 20 components are displayed. Informed Consent Statement: Informed consent was obtained from all subjects involved in this study. | 2022-04-27T15:08:12.716Z | 2022-04-24T00:00:00.000 | {
"year": 2022,
"sha1": "dde580375d727233784220414c6256e3c5f97381",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/5/887/pdf?version=1650785456",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b411579cf788b0518270a8c9d7066c87c32a7129",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
254685917 | pes2o/s2orc | v3-fos-license | Runtime Monitoring for Out-of-Distribution Detection in Object Detection Neural Networks
Runtime monitoring provides a more realistic and applicable alternative to verification in the setting of real neural networks used in industry. It is particularly useful for detecting out-of-distribution (OOD) inputs, for which the network was not trained and can yield erroneous results. We extend a runtime-monitoring approach previously proposed for classification networks to perception systems capable of identification and localization of multiple objects. Furthermore, we analyze its adequacy experimentally on different kinds of OOD settings, documenting the overall efficacy of our approach.
Introduction
Neural Networks (NNs) can be trained to solve complex problems with very high accuracy. Consequently, there is a high demand to deploy them in various settings, many of which are also safety critical. In order to guarantee their safe operation, various verification techniques are being developed [3,10,16,21,32,35]. Unfortunately, despite the enormous effort, verification of NNs of realistic industrial size is not within sight [1]. Therefore, more lightweight techniques, less dependent on the size of the NN, are needed these days to provide some assurance of safety. In particular, runtime monitoring replaces checking correctness universally on all inputs by following the current input only and raising an alarm whenever the safety of operation might be violated.
Due to the omnipresent abundance of data, NNs can typically be trained well on the given inputs. However, they may work incorrectly, particularly on inputs significantly different from the training data. Whenever such an Out-Of-Distribution (OOD) input occurs, it is desirable to raise an alarm, since there is much less trust in a correct decision of the NN on this input. OOD inputs may be, for instance, pictures containing previously unseen objects or with noise stemming from the sensors or from an adversary.
In this paper, we provide a technique to efficiently detect such OOD inputs for the industrially relevant task of object detection, in which objects in an input image need to be localized and classified. We consider PolyYolo [20] as the object detection system of choice, as it encompasses a very complex architecture, like the complex perception systems used in the development of advanced driver assistance systems (ADAS) and autonomous driving functions. Our approach builds upon a recent runtime-monitoring technique [14] for efficient monitoring of classification networks. As we consider object detection networks, the setting is technically different: the inputs are of a different type and, apart from classifying objects, their bounding boxes are to be produced. Even more importantly, the number of objects in the picture to be identified can now be more than one (often reaching dozens). As a result, questions arise as to how to apply the technique in this context so that the efficiency and adequacy of the monitor are retained or even improved. (This project has received funding from the European Union's Horizon 2020 Hi-Drive project under grant agreement No. 101006664 and the project Audi Verifiable AI.)
Our contribution can be summarized as follows. We (i) propose how to extend the technique to this new setting (in Section 3.1), (ii) improve and automate the detection mechanism (in Section 3.2), and (iii) provide experiments on industrial benchmarks, concluding the efficacy of our approach (in Section 4). In particular, our experiments focus on OOD due to pictures (i) from other sources, (ii) affected by random noise, e.g., from sensors, and (iii) affected by adversarial noise due to an FGSM attack [13]. On the methodological side, we leverage non-conformity measures to automate threshold setting for OOD detection. Altogether, we extend the white-box monitoring approach [14] to object detection systems more suited for real-world applications.
Related Work
In this paper we focus on OOD detection when considering the neural network as a white box. OOD detection based on the activation values of neurons observed at runtime is extensively exploited in the state of the art [2,4,14,17,25,31]. In particular, Hashemi et al. [14] calculate the class-specific expectation values of all of a layer's neurons based on training data to abstract the In-Distribution (ID) behavior of the network. On top of that, they calculate the activations' confidence interval per class. At runtime, if the network predicts a class but the activation values are not within the class-specific confidence interval, the result is declared as OOD, as it does not match the expected ID behavior represented by the interval. Sastry et al. [31] also monitor the network's activations during training. With this information, they calculate class-specific Gram matrices, allowing them to detect deviations between the values within the matrix and the predicted class during execution. Henzinger et al. [17] use interval abstraction [6], where for each neuron an interval set is built which includes the neuron's activation values recorded while executing the training dataset. They utilize these constructed abstractions to identify novel inputs at runtime. In a follow-up work, Lukina et al. [25] calculated distance functions to quantitatively measure the discrepancy between novel and in-distribution samples. Other directions of work for OOD detection involve generative models to measure the distance between the original image and the generated sample, or monitoring of the last layer, e.g., [24,33].
Hendrycks et al. present different benchmarks for OOD detection in multi-class, multi-label and segmentation settings and apply baseline methods [15]. They show that the MaxLogit monitor works well on all those problems. However, it is not directly applicable to the problem of object detection as in the other settings either the image or each pixel separately is assigned to classes. In the case of object detection, some parts of the image cannot be assigned meaningfully.
While all of the above techniques focus on classification or segmentation networks, we are only aware of a few other approaches focusing on object detection neural networks. Du et al. [8] introduced a method for monitoring object detection systems by distilling unknown OOD objects from the training data and then training the object detector from scratch in combination with an uncertainty regularization branch. Similarly, the authors of [9] train an uncertainty branch by artificially synthesizing outliers from the feature space of the NN. Consequently, these tools are not applicable to the frozen graph of a trained model. Unfortunately, this restriction defeats the purpose of using (and monitoring) a given trained network.
We refer the reader to [34] for a detailed overview on other monitoring approaches.
Neural Networks
Neural Networks (NNs) are learning components which are often applied to complex tasks especially when it is hard to directly find algorithmic solutions. Examples of such tasks are classification, where the type of object in an image should be predicted, and object detection. In the latter case, images can contain several different objects at different locations. The NN identifies the different objects in the image, assigns them to classes and computes bounding boxes, usually of rectangular form, surrounding the object.
In general, a NN consists of several consecutive layers 1, ..., L containing computation units called neurons. The neurons receive their input as a sum over weighted connections to neurons in the previous layer and apply a usually non-linear activation function σ to this input. The result of this computation is called the activation value h of the neuron. More formally, the behavior of a neuron j in layer l + 1 with activation function σ^{l+1} and incoming weights w_{ij} from the neurons i ∈ N_l of layer l can be described as follows for an input x:

h_j^{l+1}(x) = σ^{l+1}( Σ_{i ∈ N_l} w_{ij} · h_i^l(x) ).     (1)

The activation values for neurons at layer 1, which is called the input layer, are defined as the input x itself:

h_i^1(x) = x_i.     (2)

The last layer is the output layer. The layers in between are called hidden layers. An exemplary NN is shown in Figure 1.
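To make the layer computation concrete, the following minimal Python sketch evaluates one layer. The weight matrix and the choice of tanh as activation are illustrative assumptions, not values from the paper.

```python
import numpy as np

def layer_forward(h_prev, W, sigma):
    """Activations of layer l+1: h_j = sigma(sum_i W[j, i] * h_prev[i])."""
    return sigma(W @ h_prev)

x = np.array([1.0, -0.5])                  # h^1 = x (input layer)
W = np.array([[0.3, -0.2], [0.8, 0.5]])    # illustrative weights into layer 2
h2 = layer_forward(x, W, np.tanh)          # activation values of layer 2
print(h2)
```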
The basic network architecture can be extended with different types of layers. Examples are convolutional, batch normalization and leaky ReLU layers. A convolutional layer takes its input as a 2- or 3-dimensional matrix and moves another matrix, called the filter, over the input. The input values are multiplied by the corresponding values in the filter to obtain the output. The goal of a batch normalization layer is to normalize the activation values of the neurons. For this purpose, the mean and standard deviation are learned during training. During inference, the batch normalization layer behaves like a layer without an activation function, as it only normalizes the activation values according to the learned parameters. The leaky ReLU layer takes only one input without weights and performs the following activation function: LeakyReLU(h) = h if h ≥ 0, and α · h otherwise, for a small constant slope α. A more detailed introduction to NNs and different layer types can be found in [28].
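As a hedged illustration of the two special layers just described (the slope value α and the normalization constants are placeholders, not taken from PolyYolo):

```python
import numpy as np

def leaky_relu(h, alpha=0.1):
    # Identity for non-negative inputs, small assumed slope alpha otherwise.
    return np.where(h >= 0, h, alpha * h)

def batchnorm_inference(h, mean, var, gamma, beta, eps=1e-5):
    # At inference time the layer only rescales with the learned statistics.
    return gamma * (h - mean) / np.sqrt(var + eps) + beta
```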
Figure 1. An exemplary NN with an input layer, hidden layers, and an output layer.
Gaussian-Based White-Box Monitoring
In [14], Hashemi et al. introduced Gaussian-based OOD detection for a classification NN. In this setting, the NN is trained to assign an image to one of the classes in C = {c_1, ..., c_{n_L}}. The underlying assumption is that neurons behave similarly for objects of a particular class. Furthermore, neuron activation values are assumed to follow a Gaussian distribution. Therefore, the neuron activation values h_i are recorded for each monitored neuron i ∈ M, for a set of monitored neurons M, and for each sample of the training data X = {x_1, ..., x_m}, leading to a vector r_i with r_i_j = h_i(x_j). The vector is then separated by class into r_{i,c} for c ∈ C. In the next step, the mean and standard deviation µ_{i,c}, σ_{i,c} are calculated for the neurons dependent on the classes. Due to the assumption of a Gaussian distribution, 95% of the samples are expected to fall into the interval [µ_{i,c} − 2σ_{i,c}, µ_{i,c} + 2σ_{i,c}]. During inference, a new sample x is fed into the NN, a class c is predicted and the neuron activation values are recorded. The monitor checks if the activation values fall within the previously computed range of values; more formally, it checks whether h_i(x) ∈ [µ_{i,c} − k·σ_{i,c}, µ_{i,c} + k·σ_{i,c}] for all i ∈ M, where k is a factor determining the width of the interval (k = 2 for the 95% interval). However, the paper [14] showed that rarely do the activation values of all neurons fall within the desired range. Since the bounds of the interval are selected to contain 95% of the neuron activation values of the training data, even examples utilized to calculate the bounds may not fulfill the above condition. Therefore, the condition is weakened to only require a fixed percentage of neurons to be inside the bounds. This threshold was set manually in the paper with the goal of obtaining false alarm rates similar to Henzinger et al. [17].
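A minimal sketch of this construction follows. It assumes the recorded activations are collected in a matrix R of shape (samples, monitored neurons); the variable names are ours, not from [14].

```python
import numpy as np

def fit_class_bounds(R, labels):
    """Per-class mean/std of each monitored neuron (rows of R: training samples)."""
    return {c: (R[labels == c].mean(axis=0), R[labels == c].std(axis=0))
            for c in np.unique(labels)}

def fraction_inside(h, bounds, c, k=2.0):
    """Share of monitored neurons inside [mu - k*sigma, mu + k*sigma] for class c."""
    mu, sigma = bounds[c]
    return np.mean(np.abs(h - mu) <= k * sigma)
```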
Inductive Conformal Anomaly Detection
In our work, we leverage Inductive Conformal Anomaly Detection (ICAD), which was introduced in [23]. ICAD extends conformal anomaly detection [22]. The idea is to predict whether a new sample x_{m+1} is similar to a given training set X = {x_1, ..., x_m}. For this purpose, a nonconformity measure A is introduced. This function takes as input the training set and a new sample for which to compute the nonconformity score, and returns a real-valued measure of the distance of x_{m+1} to the samples of X. Afterwards, the p-value is calculated based on the nonconformity measure. The p-value for sample x_{m+1} is calculated as the fraction of training samples whose nonconformity score is at least as large as that of the new sample:

p_{m+1} = |{i ∈ {1, ..., m} : A(X \ {x_i}, x_i) ≥ A(X, x_{m+1})}| / m.     (3)

A low p-value hints at a non-conformal sample x_{m+1}. In general, this approach is inefficient, as it requires the repeated computation of the nonconformity score for the entire training set X. An improvement was introduced in [23]: the training set is split into a proper training set X_p = {x_1, ..., x_k} and a calibration set X_c = {x_{k+1}, ..., x_m} with k < m. In the first step, the nonconformity measure A is applied to samples of the calibration set based on the proper training set. For the new test sample x_{m+1}, the p-value is then computed in comparison to the calibration set:

p_{m+1} = |{j ∈ {k+1, ..., m} : A(X_p, x_j) ≥ A(X_p, x_{m+1})}| / (m − k).     (4)
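In code, the ICAD p-value of equation 4 reduces to a one-liner once the calibration scores are precomputed. This is a sketch under the notation above; the names are ours.

```python
import numpy as np

def icad_p_value(calibration_scores, test_score):
    """Fraction of calibration samples at least as nonconforming as the test sample.

    calibration_scores: A(X_p, x_j) for each calibration sample x_j.
    """
    return np.mean(np.asarray(calibration_scores) >= test_score)
```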
Monitoring Algorithm
In this paper, we propose a monitoring algorithm which extends the Gaussian-based monitoring from [14] to object detection NNs and embeds it into the framework of ICAD.
Extension to Object Detection Neural Networks
The approach presented by Hashemi et al. [14] relies on the distinction of images by different classes, as a separate interval for the neuron activation values is computed for each of the classes. However, images fed to an object detection network can contain several objects of different classes at different locations at the same time. When computing the intervals based on the classes contained in the images, one image could be relevant for several of those intervals. For example, an image containing a car and a pedestrian would contribute to the intervals for both classes. However, the pedestrian might make up only a small part of the input image, leading to only a small fraction of neurons being influenced by the object. Consequently, neurons not related to the person are considered as relevant for the class intervals. Furthermore, the position of pedestrians throughout different images can shift, and the neurons related to the pedestrian change accordingly. Consequently, the class-related intervals would mostly consist of values from neurons that are not related to objects of the class. In addition, this approach increases the runtime at inference time. A previously unseen image would need to be checked against an interval for each class it contains an object of; in the worst case, this could require as many checks as there are classes. As most of the values used for constructing the intervals are similar, since they are not related to the particular object, the computations are also highly redundant.
To resolve both issues we discard the class information. This is supported by the observation that images are generally recorded in similar areas and therefore the general setting of a street is contained in all of them. The only changes are due to the objects and are locally bounded to their locations. The approach reduces the runtime to only one check per image and discards redundant computations. In total, we monitor the following condition, discarding the class information: h_i(x) ∈ [µ_i − k·σ_i, µ_i + k·σ_i] for all monitored neurons i ∈ M, where µ_i and σ_i are now computed over the training data of all classes together.
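Dropping the class split, fitting and checking become a single pass over the training activations. The sketch below assumes the same matrix layout as above.

```python
import numpy as np

def fit_global_bounds(R):
    """One class-independent (mu, sigma) pair per monitored neuron."""
    return R.mean(axis=0), R.std(axis=0)

def inside_mask(h, mu, sigma, k=2.0):
    """Boolean mask of monitored neurons inside the interval -- one check per image."""
    return np.abs(h - mu) <= k * sigma
```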
Embedding into the Framework of Inductive Conformal Anomaly Detection
In the next step, we improve on the manual threshold setting from [14] for the number of neurons that need to fall inside the expected interval. We propose to use ICAD for this purpose. Therefore, we divide the training set into the proper training set X_p and the calibration set X_c, and define the nonconformity measure A to be the number of neurons falling outside the range [µ_{i,p} − k·σ_{i,p}, µ_{i,p} + k·σ_{i,p}] computed based on the proper training set X_p. We capture the number of neurons outside the interval rather than the ones inside, as the nonconformity measure is expected to grow for OOD data. More formally, with M as the set of monitored neurons (usually all neurons of a particular layer) and µ_{i,p}, σ_{i,p} the bounds computed as described in the last section based on the set X_p as training set:

A(X_p, x) = |{i ∈ M : h_i(x) ∉ [µ_{i,p} − k·σ_{i,p}, µ_{i,p} + k·σ_{i,p}]}|.

Afterwards, the p-value is calculated as described in equation 4. The threshold for the p-values is then set manually based on the requirements of the use case, as there is a trade-off between the false alarm rate and the detection rate. For example, a high threshold for the p-value leads to a low number of wrongly classified OOD examples, but the number of ID data classified as OOD will also rise, as even some of the images from the calibration set are classified as OOD. Overall, the threshold setting is now closely related to the calibration set instead of the abstract metric of the number of neurons inside the bounds.
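Putting the pieces together, a minimal end-to-end sketch of the monitor follows; the interval width factor k = 2 and the 5% p-value threshold (used later in the experiments) are configurable choices, not fixed by the method.

```python
import numpy as np

def nonconformity(h, mu_p, sigma_p, k=2.0):
    """A(X_p, x): number of monitored neurons outside the proper-training interval."""
    return int(np.sum(np.abs(h - mu_p) > k * sigma_p))

def is_ood(h_test, calibration_scores, mu_p, sigma_p, p_threshold=0.05, k=2.0):
    """Flag a test activation vector as OOD via its ICAD p-value (equation 4)."""
    score = nonconformity(h_test, mu_p, sigma_p, k)
    p_value = np.mean(np.asarray(calibration_scores) >= score)
    return p_value < p_threshold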
Experiments
Experiments were performed on PolyYolo [20], which is based on the famous architecture called YOLO (You Only Look Once) [29]. YOLO was introduced in 2016 by Redmon et al. and afterwards continuously extended to improve its performance. For our work, we decided to focus on PolyYolo [20] as it improves on YOLOv3 [30] while also reducing the size of the network. The architecture can be seen in Figure 2. PolyYolo consists of three main building blocks. A convolutional set contains a convolutional layer and a batch normalization layer followed by a leaky ReLU layer. A Squeeze-and-Excitation (SE) block [19] contains a Global Average Pooling layer to reduce the size of each channel to 1, followed by a reshape layer, a dense layer, a leaky ReLU layer and a dense layer. The output of this sequence is meant to represent the importance of each channel compared to the others. Therefore, the last layer of the block multiplies the input with the result of the sequence to scale the input. The residual block with SE then contains two consecutive convolutional sets followed by an SE block. The result is added to the input. The backbone of PolyYolo consists of several iterations of convolutional sets followed by residual blocks with SE, as shown in Figure 2. In between, there are three skip-connections to the neck. The neck uses upsampling to scale all results of the skip-connections to the same size and adds them up with intermediate convolutional sets. After all connections are added to one feature map, four convolutional sets are applied. The final layer is a convolutional layer. We monitored layers from the last convolutional set of the network, as those are the last hidden layers, and Hashemi et al. [14] discovered that a monitor based on the last layers of a NN leads to more accurate results. Namely, we focus on the last batch normalization and leaky ReLU layers. As ID data we used Cityscapes [5], which is the data set PolyYolo was trained on.

Figure 2. The architecture of PolyYolo, adapted from [20]. White blocks represent convolutional sets, light pink indicates residual blocks with SE, and dark pink shows the upsampling.
We computed intervals for the neuron activation values based on 500 training images of the Cityscapes data set, and the calibration set consists of 100 test images of Cityscapes. In a first step, we investigated the size of the calibration set. Figure 3 shows the importance of including images with different features. The x-axis shows the interval of p-values considered for the bar, while the y-axis shows the number of images resulting in a p-value within this interval. For a calibration set of size 20, many samples obtain a p-value in the interval (15,20]. For a large calibration set, the peaks in the graph are flattened. However, it is also noticeable that some elements of X_c are of more importance to the test data than others, resulting in peaks as they separate the test data. Small bars in the graph are the result of elements of X_c that do not contribute a nonconformity value differing greatly from that of their neighbors. Therefore, samples from the test data that have a higher nonconformity score than these images also have a larger nonconformity score than other samples of X_c. A more advanced selection strategy for the calibration set could reduce this effect. We therefore fix the size of the calibration set to 100 images. Following [14], we obtained OOD data by using a different data set, namely KITTI [11], which also contains images captured by a vehicle driving in a German city. However, all 100 randomly selected images from the KITTI data set resulted in a p-value of 0, which is indicated with the red bar. Therefore, we generated OOD examples from the 250 Cityscapes images we used as test data by adding Gaussian noise, as noise can be used to fool a neural network [7,18,26]. Our implementation is based on [27]. We considered additive Gaussian noise with mean 0 and variance 0.02, 0.04 or 0.06. The noise is barely detectable by humans (see Figure 5) but leads to severe faults in PolyYolo. As indicated in Figure 5, noise of variance 0.02 already leads to a huge decrease in detection rate, and for a larger variance no objects were detected correctly. In Figure 4, the behavior of the p-values for images with additional noise is portrayed. The noises of variance 0.02, 0.04 and 0.06 are depicted by cyan, green and orange bars, respectively. For better readability, some bars were shortened. It can be seen that the p-values decrease when the severity of the noise increases. This trade-off can be considered when selecting a threshold value at runtime in order to decide when to raise an alarm. For the evaluation of the monitor in a practical setting, we set the threshold for p-values to 5%, meaning that a sample is classified as ID if it has a higher p-value than at least 5% of the calibration set. This decision was influenced by Figure 4: with this threshold, most samples perturbed with severe Gaussian noise, but only a small portion of ID samples, are classified as OOD. The experiments were carried out on 100 previously unseen images of the Cityscapes data set as well as 100 images of KITTI and A2D2 [12]. Perturbations were applied to the Cityscapes images. In addition to Gaussian noise, we used impulse noise, also called salt-and-pepper noise, and the Fast Gradient Sign Method (FGSM) attack [13]. The impulse noise manifests as white and black pixels in the image, and its strength is influenced by the random parameter. Our implementation is again based on [27]. The FGSM attack corrupts the input pixels based on the gradient of the output.
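For reproducibility, the two noise perturbations can be sketched as follows. This is our own re-implementation of the idea; the implementation cited as [27] may differ in details, and the impulse-noise amount below is an assumed value.

```python
import numpy as np

def gaussian_noise(img, var):
    """img in [0, 1]; additive Gaussian noise with mean 0 (variances 0.02-0.06)."""
    noisy = img + np.random.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def impulse_noise(img, amount=0.05):
    """Salt-and-pepper: a random fraction of pixels set to white or black."""
    out = img.copy()
    mask = np.random.random(img.shape[:2]) < amount
    out[mask] = np.random.choice([0.0, 1.0], size=mask.sum())[..., None]
    return out
```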
The gradient is used to calculate a mask of changes, which is then added to the input image. The mask is usually multiplied by a small factor to make the attack less obvious to humans. Examples of the perturbations can be seen in Figure 5.
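A minimal FGSM sketch is given below. The framework (TensorFlow) and the value of eps are our assumptions; PolyYolo's actual training loss would be plugged in as loss_fn.

```python
import tensorflow as tf

def fgsm(model, image, label, loss_fn, eps=0.01):
    """x_adv = x + eps * sign(grad_x loss); eps kept small to stay inconspicuous."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(label, model(image))
    grad = tape.gradient(loss, image)
    return tf.clip_by_value(image + eps * tf.sign(grad), 0.0, 1.0)
```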
Results of the experiment are shown in Table 1. The number of ID data classified as OOD lies within the range of expected values due to the setting of the threshold to 5%. Both layers detect Gaussian noise with variances of 0.04 and 0.06, while a variance of 0.02 can fool the approach. However, this noise is less critical, as large objects are still detected by the network (see Figure 5 for an example). For the attacked images, the leaky ReLU layer was more precise. This is presumably due to the fact that in the FGSM images, pixels were purposely changed to make a large impact on the output of the network. The leaky ReLU layer is a successor of the batch normalization layer and the last layer before the output layer; therefore, the changes should be reflected there more strongly. Furthermore, it is noticeable that all images taken from the different data sets were classified correctly.
Conclusion and Future Work
In this work, we developed a tool to detect OOD images at runtime for 2D object detection systems. The idea is based on Gaussian monitoring of the neuron activation patterns. We additionally embedded the method into the framework of inductive conformal anomaly detection to obtain a quantitative measure of the difference between the training set and new samples. Experiments visualizing the p-values were carried out. The proposed idea can be extended in several ways. First of all, the selection of images for the calibration set can be improved, as we observed a difference in importance among the randomly selected images. In addition, the selection of monitored layers requires further evaluation. We only considered the last two hidden layers of the network. However, the architecture of PolyYolo contains staircase upsampling with skip connections. Activation values obtained from these connections are a natural way to extend the monitoring approach to also take intermediate neuron values into consideration. Furthermore, more experiments on other neural network architectures are required in order to generalize the results. For the same reason, different types of perturbations and attacks should be considered for generating OOD data. An extension of the MaxLogit monitor from [15] to the application of object detection, with the goal of comparing both monitors, is worth exploring. | 2022-12-16T06:41:52.781Z | 2022-12-15T00:00:00.000 | {
"year": 2022,
"sha1": "ae0165a5484b56f4b1185fea2e9eb7ccfcb9df2e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ae0165a5484b56f4b1185fea2e9eb7ccfcb9df2e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225285523 | pes2o/s2orc | v3-fos-license | Nursing Surge Capacity Strategies for Management of Critically Ill Adults with COVID-19
Background: There is a vital need to develop strategies to improve nursing surge capacity for caring of patients with coronavirus (COVID-19) in critical care settings. COVID-19 has spread rapidly, affecting thousands of patients and hundreds of territories. Hospitals, through anticipation and planning, can serve patients and staff by developing strategies to cope with the complications that a surge of COVID-19 places on the provision of adequate intensive care unit (ICU) nursing staff, both in numbers and in training. Aims: The aim is to provide an evidence-based starting point from which to build expanding staffing models dealing with these additional demands. Design/Method: In order to address and develop nursing surge capacity strategies, a five-member expert panel was formed. Multiple questions directed towards nursing surge capacity strategies were posed by the assembled expert panel. A literature review was conducted by accessing various databases, including MEDLINE, CINAHL, Cochrane Central, and EMBASE. All studies were appraised by at least two reviewers independently using the Joanna Briggs Institute (JBI) Critical Appraisal Tools. Results: The expert panel has issued strategies and recommendation statements. These proposals, supported by evidence-based resources in regard to nursing staff augmentation strategies, have had prior success when implemented during the COVID-19 pandemic. Conclusion: The proposed guidelines are intended to provide a basis for the provision of best practice nursing care during times of diminished intensive care unit (ICU) nursing staff capacity and resources due to a surge in critically ill patients. The recommendations and strategies issued are intended to specifically support critical care nurses caring for COVID-19 patients. As new evidence becomes available, updates can be issued and strategies, guidelines and/or policies revised. Relevance to Clinical Practice: Through discussion and condensing research, healthcare professionals can create a starting point from which to synergistically develop strategies to combat crises that a pandemic like COVID-19 produces.
Introduction
The recent viral outbreak, initiated in Wuhan, China, has now crossed all borders and has spread into more than 224 countries [1]. The outbreak is caused by a novel strain of coronavirus. Measures have been proposed at the hospital level, such as developing triage protocols to reallocate human and medical resources to equitably meet the needs of patients [13,14]. The triage process often starts with an inventory of potential ICU resources, such as ventilatory capacity in the hospital, and then follows an algorithm for screening and admission [13]. Periodic patient assessment is necessary to check whether there is any change in patients' needs in order to transfer, admit, or discharge patients [14]. Triage protocols may also be developed at a regional level to allow for communication and resource sharing among all hospitals in one region [14]. This strategy gives more opportunities for better utilization of resources [14].
The goal of nursing surge capacity is to find wise ways to augment and extend the hospital workforce, and to allocate healthcare resources in an ethical, rational, and organized manner to do the greatest good for the greatest possible number of patients [14]. In order to combat the complications that the pandemic threatens to inflict on the level of care, a decision was made to develop nursing surge capacity recommendations and strategies for the management of critically ill patients with COVID-19 in the ICU. The objective of these strategies is to provide guidance and recommendations to help nursing administrators and leaders prepare for a COVID-19 pandemic in the ICU.
Data Sources
The search strategy aimed to find published studies in MEDLINE, CINAHL, Cochrane Central, and EMBASE from December 2019 through March 2020 (Figure 1). The keywords used were: COVID-19, coronavirus 2019, ICU surge capacity, nursing surge capacity, and strategies. The filters applied included "humans", "last 10 years", and "English language". The unpublished studies were searched in ProQuest and MEDNAR.
Quality Assessment of Extracted Data
Initially, all titles and abstracts were screened independently by at least two reviewers. All full texts of the studies which passed through the initial stage were retrieved and assessed against the review inclusion criteria in detail. These eligible studies were again appraised by at least two reviewers independently using the Joanna Briggs Institute JBI Critical Appraisal Tools [15]. The JBI appraisal has different checklists to be applied against different study designs. The instrument consists of 10 items that assess the methodological quality of a study and determines the extent to which a study has addressed the possibility of bias in its design, conduct, and analysis. The results of the JBI appraisal have been taken into full account and used to inform the synthesis and interpretation of the results of the recommendations (Figure 1).
A total of 220 studies were retrieved. After reading the titles and abstracts, 150 studies were excluded. After reading the full articles, a total of 53 articles were excluded and 17 articles were included which met the inclusion criteria ( Figure 2). All identified publications were collated and fed into Endnote X10 software. The evidence-based strategies issued are to support critical care nurses to manage critical patients in the intensive care unit during the COVID-19 pandemic. Four recommendations and rationales were issued by the expert panel based on evidence.
Recommendation 1: Regular Patient-to-Nurse Ratio
When able to, we recommend nurse staffing ratios of 1:1 or 1:2 in the ICU during the COVID-19 pandemic to provide high-quality patient care, improve safety, and achieve fewer complications and better outcomes (Figure 3). This should be followed until such time that the surge is felt. At that time, progression to Recommendation 2 will be made.
Rationale
Matching patient needs with adequately trained nurses and maintaining a safe patient-to-nurse ratio are essential to ensure the provision of safe and high-quality patient care. As such, nurse staffing ratios in critical care units are an important aspect when planning care [16]. The literature on nursing ratios in the ICU has confirmed the relationship between ICU nurse staffing and patient outcomes. The reviewed studies confirm that a higher registered-nurse-to-patient ratio (1:1 or 1:2) is strongly associated with improved patient safety and better outcomes [13]. In the U.S. and Canada, the nurse-to-patient ratio in the ICU stays close to 1:1.5 at both time points. Western Europe and Latin America had lower nurse staffing, especially at night, with an overall ratio of ~1:1.8 [17]. Note that this is the preferable situation when applicable or during non-pandemic times.
Additionally, critically ill patients require the care of nurses who have specialized knowledge and skills and who are given enough time to provide that care safely. Appropriate staffing ensures effective pairing of patient/family needs with the assigned nurse's knowledge, skills, and abilities. In fact, evidence confirms that the likelihood of serious complications and mortality rates increase when fewer registered nurses (RNs) are assigned to care for patients [13,18,19]. Similarly, a considerable amount of research indicates healthy work environments and better patient outcomes when a higher percentage of patient care tasks are provided by RNs [20].
Rationale
Most countries that have already been hit hard by COVID-19 have attempted to increase the supply of healthcare. Having care directed by trained and experienced ICU nurses is an effective way to provide high-quality care for critically ill patients [21]. However, during crisis times, the number of ICU nurses cannot accommodate a large number of patients. Additional personnel can be identified internally through the scale-back of elective and non-urgent services in the hospital. As elective surgeries are placed on hold, nurses from areas like the Surgical ICU, Endoscopic units, Step-down units, Post Anesthesia Care Unit (PACU), and Pre-Op become available for ICU staffing needs. These nurses should be the first choice to augment ICU staffing and expand ICU beds during pandemics such as COVID-19, as their skills are most readily transferable, thereby having the potential to increase the critical care capacity of the hospital in the safest way possible. To expand the staffing capacity further, hospitals may consider searching external resources to identify and recruit ICU nurses who had transitioned to ambulatory care settings and other nurses from community care settings to support ICU staff during the crisis [7]. Additionally, other qualified medical professionals can be recruited to safely manage the care of mechanically ventilated patients. Anesthesiologists and physicians who have ventilator management experience are potential resources to supplement ICU care teams. With minimal orientation, they can easily support respiratory therapists and nurses to achieve safe ventilatory support for those requiring it [7]. Other potential caregiver support could include students in medical, nursing, and other health education programs who are nearing the end of their studies. Many would be suitable for providing services to patients or helping to respond to public concerns through telephone hotlines [21]. The team is led by an ICU physician who works with a respiratory therapist trained in critical care and 2 ICU nurses who supervise 3 step-down nurses. Each team provides care for 15 patients [12].
In this model, one experienced ICU physician oversees 4 teams composed of ICU physicians, respiratory therapists, and nurses, supported by other hospital professionals, to take care of 24 patients each [10].
Rationale
To overcome the anticipated shortage of ICU staff during the COVID-19 pandemic, hospitals are recommended to adopt a team-based approach. The Ontario Health Plan for an Influenza Pandemic Care Team Approach and the Society of Critical Care Medicine (SCCM) Tiered Staffing Strategy for a Pandemic are recommended models for ICU staff augmentation during pandemics such as COVID-19. Both strategies have similar concepts and applications. They focus on the utilization of non-experienced healthcare workers to work in collaboration (in teams) with experienced staff to increase the capacity of care for critically ill patients. This strategy has been demonstrated to work effectively in pandemic situations [7,21].
The tiered staffing strategy combines experienced ICU nurses with reassigned hospital nurses. Instead of the regular care delivery model where each ICU nurse provides care for one to two patients (Figure 2), in this strategy, each ICU-trained nurse will supervise and direct two other re-assigned nurses, who have useful skills but lack experience in the ICU setting, to ultimately provide care for four critically ill patients. ICU physician(s) trained in critical care, or those who regularly manage ICU patients, will oversee all nurse teams (Figure 3) [12,13,21,22].
As the situation unfolds, teams can be expanded to care for more patients, such as six, eight, or more, as required. Tiered staffing models are not set standards, and each hospital must determine the best combination of staff based on its resources [11,23,24]. Combining ICU-trained and non-ICU-trained nurses will help ensure adequate levels of care without overwhelming ICU-trained staff. When implementing the current strategy and combining inexperienced team members, it is recommended to maintain effective communication within the team. This can be achieved through different means, such as team huddles at the start of each shift and at regular intervals, such as every 4 hours, to discuss team assignments, patient care goals, and red flags that should be reported immediately to the team leader [25]. This will ensure effective communication and allow each team member to discuss his/her patients' needs and get the experts' opinion. If a physical huddle is difficult, virtual huddles can be applied to enhance patients' safety and to keep all team members aware of all updates and changes in the unit [25].
Applications of a Team-Based Approach
The report of the Ontario Health Plan for an Influenza Pandemic presented an example of a tiered strategy and called it the Care Team Model (Figure 4). In this model, healthcare workers who have useful skills but lack experience in critical care can work in teams supervised by experienced staff and collectively care for a larger group of patients. In place of an individual specialized nurse caring for one to two patients, a team of mixed experienced nurses provides the care for a group of patients. This is possible because, in combination, they have the complete skill set and pertinent experience required to care for expanded patient numbers. In this example, one intensivist can supervise three teams, each composed of one physician, one respiratory therapist and two ICU nurses who supervise three step-down nurses. Each one of the 3 teams will take care of 5 patients, and the 3 teams together will provide care to 15 patients [10,11]. The care team model focuses on the provision of care by a team of healthcare workers. Teams would be created with feedback loops and operate under this designated hierarchy, guided by expected job functions and responsibilities. This model has proven to be effective in past emergencies [10,11,15,16].
The SCCM presented an expanded example of the applications of the tiered staffing strategy for pandemics, with a larger number of healthcare workers and a larger capacity for care provision (Figure 5). It suggests that one ICU-experienced physician oversees the care of 4 teams, and each team provides care for 24 patients. Each one of these teams is supervised by an ICU physician or a non-ICU physician, such as an anesthesiologist, pulmonologist, surgeon, or hospitalist, who does not frequently perform ICU care but has some ICU training. Each team is composed of an experienced respiratory therapist and other clinicians, such as physicians, nurse anesthetists, or pharmacists, who are experienced in managing ventilated patients. There are four ICU nurses in each team; each nurse is responsible for supervising three re-assigned nurses, and each re-assigned nurse will care for two patients. Ultimately, each team will provide care for 24 patients, and the four teams together will provide care for 96 patients [16]. This alternative strategy may be implemented as ICU-trained nurses fall ill and become less available to care for patients.
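As a simple illustration of the arithmetic behind these models, the capacities can be derived as follows; the ratios are those quoted above, and the helper function is ours.

```python
def team_capacity(icu_nurses, reassigned_per_icu_nurse, patients_per_reassigned):
    """Patients covered by one tiered team."""
    return icu_nurses * reassigned_per_icu_nurse * patients_per_reassigned

# SCCM example: 4 ICU nurses, each supervising 3 re-assigned nurses,
# each re-assigned nurse caring for 2 patients -> 24 patients per team.
per_team = team_capacity(4, 3, 2)     # 24
print(per_team, 4 * per_team)         # 4 teams -> 96 patients in total
```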
Recommendation 4: Training Model for ICU Tiered Staffing Strategy for COVID-19 Pandemic
Illustrated in this model (Figure 7) is a team composed of two ICU nurses; each nurse trains one re-assigned nurse, and together they provide care for two critically ill patients. Training should enable the re-assigned nurse to care for two patients (ideally, at least one of whom is ventilated) under the direction of an ICU-trained nurse. This will orient the re-assigned nurse, as well as orient the ICU-trained nurse as to which tasks and responsibilities will be assigned, divided, and shared. In the training, ventilator management should be the main focus, including modalities, high PEEP considerations, O2 saturations, ABG interpretation, suctioning, proning, sedation, paralytics, and pain control, though sedation vacations must be reviewed by medical staff as to risk versus benefit.
Rationale

A significant number of critically ill patients will be admitted to intensive care units during the COVID-19 pandemic. Staffing will be further strained by the threat of experienced ICU staff nurses becoming ill [26]. During the COVID-19 pandemic, it is anticipated that the projected shortfall of well-trained ICU nurses will impact the care of critically ill ventilated patients. Consequently, the focus should not only be to increase the number of mechanical ventilators but must also address the number of trained critical care nurses required to care for mechanically ventilated COVID-19 patients, alongside non-COVID-19 patients requiring ICU care [25,26]. Assigning hospital nurses to work immediately in the ICU during a crisis without enough training may put the nurse and patients at high risk. Therefore, planning for appropriate nursing staff prior to such a pandemic is required. Augmenting critical care nursing staff is one innovative way to scale up staffing capacity during a pandemic. Individual healthcare organizations must modify their strategies, thereby aligning ICU staffing with their patient needs and with available resources [25,26]. In this strategy, consideration should be made to have already chosen and delegated non-ICU-trained nurses to be stationed in the ICU and assigned to an ICU nurse in order to form a controlled baseline training prior to the actual surge. This will establish roles and responsibilities and form the foundation to build an expanding team when a surge becomes evident.
Conclusions
In anticipation of COVID-19 demands upon nursing staff and the subsequent potential weakening of care levels in the provision of patient care, specifically in the ICU setting, a panel was formed to raise and answer critical concerns. The nursing surge capacity for critically ill patients with COVID-19 in the ICU was addressed by searching the available evidence. Evidence was retrieved from a variety of databases, inclusive of published and unpublished studies. The retrieved studies were then reviewed by a minimum of two reviewers independently using the JBI critical appraisal tools. The recommendations in the present guidelines cover ICU nursing surge capacity strategies. We recommend that hospitals implement the evidence-based strategies that have been shown to be effective, such as a team-based approach, and establish other innovative strategies for ICU nursing staff surge capacity in the COVID-19 pandemic. As new evidence presents itself, further updates of the guideline will be issued. | 2020-09-10T10:24:46.775Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "b7b86e9a18a172563550dff422454ff73f3468a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2039-4403/10/1/4/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3150f31687f2775f3515ee8def82557e05b06f06",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
234608081 | pes2o/s2orc | v3-fos-license | Garget Preventive Measures of Recipient Cows
This article presents the results of an experiment to establish the effectiveness of electrostimulation with the DiaDENCE-T device on biologically active zones of the udder of recipient cows to prevent mastitis. The evaluation was carried out by determining somatic cells in milk in the control and experimental groups, as well as by analysing milk productivity indices at the beginning and at the end of the experiment. Throughout the experiment, a stable number of somatic cells was noted in the experimental group of recipient cows, which is within the limits of the norm (in cows of black-motley breed the normal content of somatic cells is up to 400 thous/cm3), while in cows of the control group the content of somatic cells increased by 114% from the beginning of the experiment and made up 784.2 thous/cm3. The mass fractions of fat and protein in the control group were found to decrease by 19% and 5%, respectively, due to the presence of subclinical mastitis in cows. In the experimental groups, the mass fractions of fat and protein were within normal limits. The control group of animals recorded an increase in white blood cell count, while the leukocyte content of the experimental group remained virtually unchanged and within the generally accepted norms. The results of the experiment confirm the effectiveness of the proposed method, which can be used in animal husbandry to prevent mastitis.
Introduction
Increasing the productivity of cattle and the marketability of the dairy sector is among the most important directions of livestock breeding development. A fundamental prerequisite for the development of cattle breeding is the presence of a healthy herd on farms. For high-quality milk production, a healthy cow udder is a key factor. However, milk hygiene and udder health problems are common throughout the world. One of the reasons for culling cows from the herd is untreated mastitis or its consequences. Huge economic losses from mastitis are due to the loss of milk, the cost of treatment and prevention of mastitis, and premature culling (Baryshev, 2016;Wojtenko, 2013;Bardhan, 2013;Steeneveld, Hogeveen, Barkema, 2008).
Literature Review
Mastitis is a complex disease manifested by inflammation of udder tissue. Inflammation of the udder in cows is noted in all physiological periods and processes of functional activity. The disease may originate under the influence of a variety of factors, the main of which include a decrease in the body's resistance, violation of milking technology, penetration of microorganisms through the teat canal, injuries to the teats and skin of the udder, retained placenta, etc. (Tuyakova, 2014;Bhutto, 2010;Fasulkov, 2012;Mukherjee, 2010).
According to the course of the disease, the nature of the inflammatory process and the basic characteristics, mastitis is divided into subclinical and clinical mastitis. Subclinical mastitis is registered 5-10 times more frequently than clinical mastitis. Subclinical mastitis specifically affects individual groups of alveoli or lobes of the udder parenchyma. The incidence of mastitis varies between 19-23%, with a ratio of clinically expressed to latent mastitis of 1:3-1:7 (Butrakov, 2014;Hiitiö, 2017;Kurjogi, Kaliwal, 2014;Memon, Javed, et al., 2012).
In livestock farms, 25.5-58.9% of animals suffer from mastitis annually. The subclinical form of mastitis is noted in 31.9% of cows, while the clinical form is found in 7.5% of cows.
The frequency of manifestation of the subclinical form prevails over clinical mastitis, and in 20-30% of cases subclinical mastitis is further transformed into the clinical form. The time interval of transition from one form to another, and up to the moment of its detection, may vary from several weeks to several months. The difficulty in detecting subclinical mastitis in time is that the disease is not clearly symptomatic. To prevent the development of clinical mastitis and atrophy of the udder lobe, it is necessary to diagnose and treat subclinical mastitis in time. Despite the seriousness of the disease, subclinical mastitis is rarely detected on dairy farms (Butrakov, 2014;Kayitsinga, 2017;Levison, Miller-Cushon, Tucker, Bergeron, Leslie, Barkema, et al., 2016;Sarba, Tola, 2017).
The World Organisation for Animal Health reports that mastitis damage far exceeds that of all other cow diseases combined. Every year, more than 50% of the world's herd suffers from mastitis. It has been found that if one quarter of the udder is affected, milk yield is reduced by 20%, and if two quarters are affected, by 40%, which then leads to low milk productivity (Bessonova, 2015;Bérdy, 2012;Rato, Bexiga, Florindo, Cavaco, Vilela, Santos-Sanches, 2013).
The annual total damage from mastitis worldwide is estimated at $35 billion. In the U.S. alone, about $2 billion is spent annually on the treatment and prevention of mastitis in cows, but even with such high expenditure, the losses resulting from subclinical mastitis reach $960 million. In Canada, mastitis costs up to $2 billion, with the main financial loss coming from subclinical mastitis.
In England, mastitis affects 22% of the population; the damage is estimated at $64.87 million.
In Germany, the incidence of mastitis in cows reached 29.9% and the damage was estimated at $197.7 million. The economic loss in Denmark is $20.56 million, and about 28% of the herd suffers from mastitis. Japan's mastitis damage is estimated at $79.1 million. The cost of cow mastitis activities in the Netherlands is estimated at $45.0 million. In Poland, mastitis losses are estimated at about $90.45 million (Bardhan, 2013;Hiitiö, 2017;Kayitsinga, 2017;Levison, et al., 2016;Zadoks, Middleton, McDougall, Katholm, Schukken, 2011).
To date, preventive measures on livestock farms include a number of activities. The main ones include identifying the group of cows prone to mastitis, providing quality feed and appropriate housing conditions, reducing stress, strict adherence to milking technology, as well as diagnosing cow mastitis. Diagnosing subclinical mastitis is an expensive and time-consuming process that requires an udder examination by a veterinarian and special chemical tests for milk analysis (Baryshev, 2016;Perepeluk, 2012;Kurjogi, Kaliwal, 2014;Taponen, Liski, Heikkilä, Pyörälä, 2017).
Currently, disinfectants that cannot be considered safe due to their chemical composition are used to prevent mastitis in cows. It is not uncommon for animals to get skin irritations, inflammations, etc. after treatment with such preparations (Wojtenko, 2013;Kayitsinga, 2017;Memon, Javed, et al., 2012).
Antibiotics used as a substitute for disinfectants can hardly be called a good alternative. Antibiotics are often injected in large quantities, and their residues are then found in milk. The haphazard use of antibiotics in the treatment of mastitis has also contributed to the development of antibiotic resistance and, as a result, reduced treatment efficiency due to the formation of drug-resistant strains of microorganisms. In addition, it has been established that antibiotic drugs can have a negative impact on animal immunological reactivity, which may explain the lack of effectiveness of treatment (Baryshev, 2016;Perepeluk, 2012;Bérdy, 2012;Rato, et al., 2013;Taponen, et al., 2017).
In cattle embryo transplantology, mastitis also plays a significant role. It is very important that donor and recipient cows are healthy animals. Otherwise, there is a high probability of unhealthy calves or premature termination of pregnancy, which again leads to high economic losses. Therefore, an udder assessment for mastitis is required.
To prevent mastitis in recipient cows, it is recommended to electrostimulate the biologically active zones of the mammary gland with the DiaDENCE-T device for 4 minutes at a frequency of 10÷15 Hz; the procedure is carried out daily for 10 days.
In this regard, it is important to strengthen preventive measures against mastitis and prevent its transition to clinical status. Treatment of the biologically active zones (BAZ) of the mammary gland of cows by electrostimulation is considered the safest approach, having no negative impact either on the animal body or on the milk received from it.
Methods
Scientific research was carried out on the basis of the dairy farm "Mikhailovskoye" of Prokopievsky district of Kemerovo region. In order to prevent the development of such a dangerous disease in selected recipient cows, it is proposed to use the DiaDENCE-T device, designed to provide a general regulatory impact on the physiological systems of the body and to treat functional disorders in a wide range of diseases. Dynamic electroneurostimulation is performed by impulses of electric current on biologically active zones of the animal organism.
To confirm the effectiveness of the proposed method, the experiment included a number of factors that increase the probability of mastitis in recipient cows by up to 47%: 1) age of recipient cows: 4 years; 2) lactation stage: 10th day after calving; 3) keeping conditions (heat, humidity, draughts).
The study was conducted on selected recipient cows divided into three groups: Control group (n=6) -cows that were not treated with the device on the biologically active zones (BAZ) of the udder and received no other prophylactic procedures to prevent mastitis.

I test group -the biologically active zones of the udder of recipient cows were treated with the DiaDENCE-T device for 4 minutes at a frequency of 10÷15 Hz. The device was moved from the udder base to each nipple (1 BAZ, 1 minute exposure time).

II test group -the biologically active zones of the udder of recipient cows were treated with the DiaDENCE-T device for 8 minutes at 20÷30 Hz. The device was moved from the udder base to each nipple (1 BAZ, 1 minute exposure time).
Electric stimulation was carried out daily for 10 days (according to the instructions for the device).
Animals of all groups were kept under the same feeding and housing conditions, at the same lactation stages.
Before the experiment, a general examination of the selected cows was carried out: body temperature, heart rate and respiratory rate were measured (Table 1). After treatment with DiaDENCE-T, these indicators were measured once again to confirm the innocuousness of the proposed method. As can be seen from Table 1, there were no differences in physiological indices in cows before and after the experiment, which indicates that the DiaDENCE-T device is harmless.
Results
The efficacy of electrical stimulation with the DiaDENCE-T device on the biologically active zones of the mammary gland of the recipient cows was evaluated by determining somatic cells in the milk in the control and experimental groups, as well as by analysing milk productivity at the beginning and end of the experiment (on day 11) (Figure 1, Table 1). The physiological norm of somatic cell content in milk is considered to be from 100 to 500 thous/cm3 (normal somatic cell content up to 400 thous/cm3 in cows of black and motley breed). As can be seen from Figure 1, after 10 days in unfavorable conditions the content of somatic cells in the milk of cows of the control group increased (by 114%, making up 784.2 thous/cm3), which indicates the development of inflammatory processes (mastitis) in the cows.
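As a consistency check (not stated explicitly in the source), assuming the 114% increase is relative to the initial value, the implied baseline somatic cell count is

$$x_0 = \frac{784.2}{1 + 1.14} \approx 366.4\ \text{thous/cm}^3,$$

which indeed lies below the 400 thous/cm3 norm for the breed.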
Figure 1. Efficiency of mastitis prevention with DiaDENCE-T device at various parameters
During the whole experiment, a stable number of somatic cells was observed in the experimental group of recipient cows, which is within the normal range. Thus, electrostimulation of the biologically active zones of the udder with the DiaDENCE-T device has a preventive effect and prevents the development of the inflammatory process (mastitis). The reduction of the mass fractions of fat and protein in the control group by 19% and 5%, respectively, is also due to the presence of subclinical mastitis in cows. In the I and II experimental groups, the mass fractions of fat and protein were within the norm, which also confirms the effectiveness of the proposed method.
Discussion
According to the obtained results, it was decided that it is not rational to use the udder treatment method of the II experimental group. That method involves a higher frequency of electrostimulation (20÷30 Hz) and a longer treatment time (8 minutes), while in the I experimental group the same milk quality parameters are achieved with 4 minutes of electrostimulation at 10÷15 Hz.
The impact of the proposed method of treating the mammary glands of the cows can also be assessed from blood morphology. The results of the study are presented in Table 3. As can be seen from the data obtained, the number of erythrocytes increased by 3.2% after 10 days of electrostimulation with DiaDENCE-T, while the values of the leukocyte formula remained practically unchanged.
The most important indicator is the leukocyte count in the blood of the recipient cows in the control and experimental groups. The control group showed an increase in white blood cell count to 12.33×10^9/l at the end of the experiment, which is 36% more than before the experiment. Such a sharp increase in white blood cells indicates the onset of inflammatory processes in the body, in particular the development of mastitis. In the experimental group, the content of leukocytes remained practically unchanged and was within the generally accepted norms.
Conclusion
Thus, mastitis is one of the most common diseases in cows. Subclinical mastitis is considered one of the most common and dangerous forms, because it has no visible signs and is difficult to identify in the early stages of development. Every year, farms spend huge sums of money on activities aimed at the diagnosis, prevention and treatment of mastitis, which in turn leads to higher prices of products offered on the market. The presented results confirm the efficiency of the proposed method, which can be recommended to farms for the prevention of mastitis in recipient cows, allowing a reduction in economic losses through the maintenance of a healthy herd.
reproduction of high-value breeding dairy cattle resistant to leukemia virus" unique identifier agreement RFMEFI60718X0208. | 2021-05-17T00:03:00.908Z | 2020-11-02T00:00:00.000 | {
"year": 2020,
"sha1": "f5e59b3137edfaeee6b2d044c92d301d29d8d726",
"oa_license": "CCBY",
"oa_url": "https://scienceopen.ru/sites/default/files/zubova.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a516200f3c6ea2db3e267cb0ae3c755bb17cfa0c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
212828389 | pes2o/s2orc | v3-fos-license | Fault Diagnosis of Distribution Terminal Units’ Measurement System Based on Generative Adversarial Network Combined with Convolutional Neural Network
As the condition monitoring and control device in the distribution automation system, an abnormal or fault state of a distribution terminal unit's measurement system will negatively affect the quality of the measured electrical quantities; therefore, fast and accurate discrimination of the abnormal state's data will improve the reliability of the distribution automation system. This paper proposes a method, based on a generative adversarial network (GAN) combined with a convolutional neural network (CNN), to discriminate the specific fault category of distribution terminals' measured electrical data. Firstly, four fault-state characteristic time-frequency domain graph models of terminals' AC voltage sampling data are established, based on the short-time Fourier transform (STFT). Then, the GAN's ability to reconstruct input graph data is exploited to generate additional time-frequency sample graphs, expanding the sample size of the training set, which is then used to train a CNN to diagnose and classify the fault state of terminals from measured data. Finally, three training sets with different capacity expansion modes are set up to compare and verify that the method of GAN combined with CNN proposed in this paper improves the discrimination accuracy on the fault data and validates the diagnosis of the terminals' measurement system.
Introduction
The distribution terminal units, usually installed in medium-voltage substations, are important condition monitoring and control equipment in the distribution automation system [1][2][3]. Affected by the installation environment and by differences in product quality and performance, the terminals' measurement systems frequently malfunction or operate abnormally; the lack of online fault diagnosis and discrimination methods, however, makes the negative effect of terminal faults much worse [3]. Therefore, it is necessary to study a method to fully exploit the information potential contained in the measured electrical data of terminals, and to diagnose measurement system faults by judging whether the electrical monitoring data of the distribution terminal are normal, so as to improve the reliability of the terminals. As for research on discrimination methods for electrical equipment's abnormal feature data, literature [4] established a BP neural network to identify fault types based on sampled data of some typical voltage transformer faults; however, the identification accuracy and generalization performance of the method still leave great room for improvement. Literature [5] used the Elman neural network's strong adaptability to diagnose faults of high-voltage measurement systems, but the number of analysed feature samples was not sufficient, which might affect the results of fault diagnosis and discrimination. Literature [6][7] respectively used Wavelet Transformation to extract fault features and a comparison of transverse and longitudinal currents to identify transformer faults. However, the selection of fault features was subjective and incomplete to a certain extent. Literature [8] deeply analysed the mechanism and equivalent model of harmonic-type transformer faults, but did not conduct any in-depth research on data-driven discrimination.
As for research on deep learning methods for learning data features and identifying data patterns, literature [9] realized the accurate classification of time-frequency diagrams of different bearing vibration faults through a deep CNN. Literature [10] studied a Wavelet Transform method to generate wavelet maps of the vibration signals of loose transformer windings and loose iron cores, and to generate grey-scale feature graphs, which were used to train a CNN, successfully realizing identification of the vibration signals with the deep net. The above studies are all based on CNNs and have achieved good results in various pattern recognition and classification problems. However, the training sample size required by a CNN is large, and the over-fitting problem of the model on small-scale training samples has not yet been solved. For the enhancement and repair of feature data, literature [11] trained a GAN to generate new face pictures, which demonstrated its great benefit for data reconstruction and enhancement.
This paper proposes a deep learning method based on a GAN combined with a CNN, aiming at accurate discrimination of abnormal voltages measured by the distribution terminals' measurement system. Firstly, four fault-state characteristic time-frequency domain graph models of terminals' AC voltage sampling data are established, based on the STFT. Then, the GAN's ability to reconstruct input graph data is exploited to generate additional time-frequency sample graphs, expanding the sample size of the training set, which is then used to train a CNN to diagnose and classify the fault state of terminals from measured data. Finally, three training sets with different capacity expansion modes are set up to compare and verify that the method of GAN combined with CNN proposed in this paper improves the discrimination accuracy on the fault data and validates the diagnosis of the terminals' measurement system.
Fault feature model of voltage data
According to the characteristics of the AC voltage in the normal and abnormal states of the distribution terminals' voltage measurement and acquisition system, four feature data models are established as the objects of data discrimination and fault diagnosis.
Measurement data model of normal state
Due to interference from the environment, the distribution terminals' measured voltage data might contain a certain amount of random noise in actual operation. Generally, it can be considered that the noise part of the data follows a Gaussian distribution. The normal state data model considering the error of acquisition and measurement is shown in the following equation:

$$u(t) = A \sin(\omega t + \varphi) + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2) \tag{1}$$

where $\varepsilon$ is the measurement noise with a Gaussian distribution of zero mean and variance $\sigma^2$, and $A$, $\omega$ and $\varphi$ are, in turn, the amplitude, angular frequency and phase angle of the measured voltage data.

Measurement data model of precision distortion

When a fault occurs in a signal processing module such as the transformer or the signal transmission unit in the distribution terminals' measurement system, the accuracy of the measured signals will be distorted, and the noise part may change markedly. In the case of a precision distortion fault, the mean value of the voltage measurements remains unchanged while the measurement variance changes, expressed as $\varepsilon_f \sim N(0, \sigma_f^2)$. The fault data model in this case is shown in the following equation:

$$u_f(t) = A \sin(\omega t + \varphi) + \varepsilon_f, \qquad \varepsilon_f \sim N(0, \sigma_f^2) \tag{2}$$
Measurement data model of decaying oscillation
When the measurement device in the distribution terminal is aging or decaying, the measured data usually exhibit a characteristic oscillation attenuation. The fault data model in this case is shown in the following equation:

$$u(t) = A e^{-bt} \sin(\omega t + \varphi) + \varepsilon \tag{3}$$

where $e^{-bt}$ is the exponential attenuation factor; $b$ is a real number greater than 0, expressing the magnitude of the exponential decay; and $\varepsilon$ is the Gaussian noise term of such measurement data, indicating the measurement error.
Measurement data model with harmonics
The measurement element in the distribution terminal may deviate from the fundamental frequency due to environmental interference or component faults. For example, when the excitation characteristic of an electromagnetic transformer degrades, the measurement will contain high-order harmonic components [8]. The fault data model in this case is shown in the following equation:

$$u(t) = A_1 \sin(\omega t + \varphi_1) + \sum_{k=2}^{K} A_k \sin(k \omega t + \varphi_k) + \varepsilon \tag{4}$$

where $A_k$ and $\varphi_k$ are the amplitude and phase angle of the $k$-th harmonic component. The data models of the distribution terminal established above belong to the time domain. In order to strengthen the classification characteristics of the different data models, the STFT is introduced to convert the digital data samples into time-frequency graphs containing both time-domain and frequency-domain feature information.
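As an illustration, the four data models reconstructed above can be simulated directly. The following is a minimal NumPy sketch, not the paper's Matlab code; the amplitude, noise, decay, and harmonic parameters are illustrative assumptions rather than the values of Table 1:

```python
# Minimal sketch of the four voltage data models (Equations (1)-(4)).
import numpy as np

FS = 250.0                # sampling frequency [Hz], as in the paper
F0 = 50.0                 # power frequency [Hz]
t = np.arange(20) / FS    # four power-frequency cycles = 20 points per sample
w = 2 * np.pi * F0
A, phi, sigma = 1.0, 0.0, 0.01   # illustrative amplitude, phase, noise std

def normal(rng):
    # Equation (1): healthy measurement with Gaussian noise
    return A * np.sin(w * t + phi) + rng.normal(0, sigma, t.size)

def precision_distortion(rng, sigma_f=0.1):
    # Equation (2): mean unchanged, noise variance increased
    return A * np.sin(w * t + phi) + rng.normal(0, sigma_f, t.size)

def decaying_oscillation(rng, b=5.0):
    # Equation (3): exponential attenuation with factor exp(-b t)
    return A * np.exp(-b * t) * np.sin(w * t + phi) + rng.normal(0, sigma, t.size)

def with_harmonics(rng, harmonics=((3, 0.2), (5, 0.1))):
    # Equation (4): fundamental plus higher-order harmonic components
    u = A * np.sin(w * t + phi)
    for k, Ak in harmonics:
        u += Ak * np.sin(k * w * t + phi)
    return u + rng.normal(0, sigma, t.size)

rng = np.random.default_rng(0)
samples = [f(rng) for f in (normal, precision_distortion,
                            decaying_oscillation, with_harmonics)]
```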
Time-frequency graph of the measurement data model
Based on the results of the STFT, time-frequency graphs of the different measurement data models are obtained, as shown in Figures 1-4: the horizontal axis represents the sampling time, the vertical axis represents the frequency components, and the depth of the pixel colour represents the amplitude of each frequency component at a given time. The four types of time-frequency graphs differ evidently in graphic characteristics such as shape and colour; therefore, they can be fed into a CNN, converting the recognition of time-domain data into the discrimination of feature pictures.
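As a minimal sketch of this conversion step, the STFT of a simulated record can be computed and rendered as a time-frequency graph with SciPy and Matplotlib instead of Matlab's spectrogram; the record length and window size below are illustrative assumptions (a record longer than the 20-point samples is used here so that the STFT is meaningful):

```python
# Convert a sampled voltage record into a time-frequency graph via the STFT.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 250.0
t = np.arange(0, 1.0, 1 / fs)                    # 1 s record for illustration
u = np.exp(-3 * t) * np.sin(2 * np.pi * 50 * t)  # decaying-oscillation example

f, tau, Zxx = signal.stft(u, fs=fs, window="hann", nperseg=64)
plt.pcolormesh(tau, f, np.abs(Zxx), shading="gouraud")  # colour depth = amplitude
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.savefig("time_frequency_graph.png")
```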
Discrimination of feature graph based on GAN combined with CNN
In order to give full play to the GAN's ability for feature learning and data regeneration and the CNN's huge advantages in image recognition, the identification and classification scheme for the fault feature data models, based on GAN combined with CNN, is shown in Figure 5. The STFT is used to convert the AC voltage sample data into time-frequency feature graph samples, and the training set composed of time-frequency graphs of the various fault types is fed into the GAN to generate reconstructed time-frequency graphs, so as to expand the capacity of the training sets of the four fault types. Then, the expanded training sets are used to train a CNN, so as to realize the classification and identification of the four types of fault feature data.

Sample generation model based on GAN

The purpose of the trained generator G is to imitate and master the distribution of the sample data, making the response of the generated distribution $G(z)$ on the discriminator D, expressed as $D(G(z))$, as consistent as possible with the response of the real data $x$ on D, expressed as $D(x)$, so that D cannot distinguish the generated samples from the real samples. The purpose of the trained D is to correctly distinguish the generated (fake) samples from the real samples. The above logical relationship can be expressed as a maximum-minimum problem [11], whose mathematical expression is as follows:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_r}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

where $x$ is the characteristic sequence and $z$ is a random number sequence; $p_r$ represents the distribution of the time-frequency feature graphs of the original true data; $p_g$ represents the distribution of the time-frequency feature graphs of the new fake data generated by G. Through continuous cross optimization, D and G improve their respective discrimination and generation abilities. This optimization process seeks the Nash equilibrium of the game between them. When the training stops, the discriminator D can hardly distinguish the time-frequency graphs of the distribution terminals' fault feature data generated by the generator from the time-frequency graphs of the original data.
In order to facilitate the processing of image-type time-frequency graphs, a deep convolutional generative adversarial network (DC-GAN) is used in this paper, which is composed of two mirrored convolutional neural networks. Except for the output layer of G and the input layer of D, batch normalization is added to all other layers, which helps stabilize the gradient of the objective function during optimization to some extent.
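A minimal DC-GAN training-step sketch in TensorFlow/Keras is given below; it is not the paper's implementation, and the 64×64×1 image size, layer widths and optimizer settings are illustrative assumptions. It implements the minimax objective above in its usual non-saturating form:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    # Mirrored de-convolutional generator; BatchNorm on all but the output layer
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 128, input_shape=(latent_dim,)),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    # Convolutional discriminator; no BatchNorm on the input layer
    return tf.keras.Sequential([
        layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(64, 64, 1)),
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.BatchNormalization(), layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),  # logit: real vs. generated
    ])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
G, D = build_generator(), build_discriminator()
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(real_images, latent_dim=100):
    z = tf.random.normal((tf.shape(real_images)[0], latent_dim))
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = G(z, training=True)
        d_real = D(real_images, training=True)
        d_fake = D(fake, training=True)
        # D maximizes log D(x) + log(1 - D(G(z)))
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # G maximizes log D(G(z)) (non-saturating generator loss)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(dt.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    return d_loss, g_loss
```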
Time-frequency graph recognition model based on CNN
A Convolutional Neural Network (CNN) is a multilayer perceptron designed to identify two-dimensional feature graphs. It is a deep network model with multiple hidden layers, which can transform low-level features into high-level features through layer-by-layer feature propagation, so as to realize feature learning and expression. Compared with BP neural networks and other shallow networks, a CNN has a stronger ability to learn and express complex features, a faster operation speed, and can better avoid the problem of getting trapped in local extrema.
The time-frequency graphs of the various data samples are fed into the CNN; after pre-processing and normalization, the input graphs are successively processed through the convolution layers, pooling layers and the full connection layer, feature extraction and dimensionality reduction are completed through activation functions, and finally the type number of the fault feature data is output, which expresses the identification and classification of the fault feature data.
In the convolution layer of the CNN, different convolution kernels are convolved with the feature graph, and the feature graph after dimensionality reduction is obtained through the activation function. The convolution process can be expressed as follows:

$$x_j^l = f\Big(\sum_i x_i^{l-1} * k_{ij}^l + b_j^l\Big)$$

where $x_j^l$ represents output $j$ of layer $l$; $k_{ij}^l$ represents a convolution kernel of the convolution layer; $*$ represents the convolution operation; $b_j^l$ represents the bias of the convolution layer; and $f(\cdot)$ is an activation function, of which the ReLU function is a commonly used one, expressed as follows:

$$f(x) = \max(0, x)$$

In the pooling layer, the maximum value (max pooling) or average value (average pooling) of the feature graph output by the convolution layer is selected in each non-overlapping region of size $n \times n$, so as to further reduce the dimension of the feature graph.
In the full connection layer, the feature graphs of the previous layer are flattened into one-dimensional feature vectors, weighted, and passed through the activation function to output the final classification. In this paper, the Softmax function is used as the activation function of the output layer. For the classification task implemented by the CNN, common loss functions include the mean square error function, the cross-entropy function and the negative log-likelihood function. The cross-entropy function, which has good performance, is selected in this paper, with the expression:

$$L = -\frac{1}{n} \sum_{i=1}^{n} \big[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \big]$$

where $n$ is the number of fault feature samples, $y$ is the true value, and $\hat{y}$ is the predicted value.
Power distribution terminal AC sampling data sample generation
The fault voltage feature sampling data samples of the distribution terminals, following Equations (1)-(4), were generated and simulated in Matlab. The AC voltage sampling frequency was set to 250 Hz, and four cycles of power-frequency measurement data points (20 voltage measurement points) composed one data sample. The parameters of each data model were adjusted for the simulation; see Table 1 for the specific simulation parameters and sample sizes.
In Matlab, the spectrogram function, which is a short-time Fourier analysis function, was used to convert the simulated sampled voltage data into time-frequency graph samples. Based on Tensorflow, Google's deep learning framework, the time-frequency graphs were cropped and the data standardized. After that, the network architectures of the GAN and the CNN were built and the corresponding model parameters were set, as shown in Table 2 and Table 3.
Combined neural network structure and fault identification results analysis
This paper specifically selected LeNet-5 as the CNN network structure, and the Adam algorithm was adopted to optimize the loss function. The global learning rate was set to 0.0001, and each batch contained 20 samples.
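A minimal LeNet-5-style classifier sketch in TensorFlow/Keras is shown below; the 32×32×1 input size and the dataset variables (x_train, y_train) are illustrative assumptions, while the Adam optimizer, the 0.0001 learning rate, the batch size of 20, the Softmax output and the cross-entropy loss follow the text:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(6, 5, activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(4, activation="softmax"),  # one output per fault class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",  # cross-entropy loss
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=20, epochs=50)
```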
Regarding the new time-frequency graphs of the various data samples generated by the generator G, take the normal data and the attenuated oscillation data as examples. After training the GAN for different numbers of steps, new time-frequency graph samples are generated, as shown in Figure 6. With increasing training step count, the time-frequency graphs generated by the GAN become closer to the real training samples, i.e., more features of the simulation data samples are learned, so that a random sequence passed through the trained de-convolution layers generates new time-frequency graph data samples, expanding the sample size of the training set. Three training sets were composed: 1) Training set A: the original real data sample set generated by the simulation. 2) Training set B: the original real data sample set expanded with the GAN-generated graphs, which contains both real data samples and new data samples generated by GAN.
3) Training set C: from the real data sample set generated by the original simulation (capacity 800), 400 time-frequency graph samples were randomly copied to form the expanded training set, which is composed of $4 \times (200 + 100) = 1200$ time-frequency graphs. The expansion mode is only random replication of the original sample set, so actually no data samples with new characteristics are added. The purpose of this training set is to eliminate the influence of training set capacity on the CNN fault classifier, so as to better compare the performance results of the method used in this paper.
The different training sets were input into the CNN, and the loss function values on the training set at different training steps were recorded and compared with the test accuracy on the test set. The results are shown in Figure 7 and Figure 8, respectively.
Conclusion
This paper presents a fault diagnosis method for the voltage sampling module of distribution terminals using a GAN and a CNN. Firstly, a normal-state voltage sampling data model containing a noise part and three types of approximate mathematical models of abnormal-state data were established; then, the simulation data were converted into comprehensive time-frequency graphs through the STFT, and the generated data samples were divided 2:1 into the original training set and the testing set. On the basis of the GAN, feature learning, pattern generation and expansion were carried out on the original training sets of the various data sample types. Then, the expanded samples were used to train the CNN (LeNet-5) model, and the fault classification performance on the same test set was obtained with three training sets of different capacities and expansion modes. The simulation data examples and the training verification results of the combined neural network showed that: 1) The time-frequency graphs obtained by the STFT make it much easier to distinguish the sampling data of distribution terminals with different fault types.
2) After the GAN generated new data samples, training set B achieved a good classification result on the testing set, indicating that the GAN can learn and regenerate the features of limited data samples and realize the expansion of a limited amount of sample data.
3) In the training process of the CNN with the simulated noisy time-frequency graph data, the CNN loss function values of the training sets with different capacities and expansion modes differ little, and all show a good decline process. The fault classification model based on the CNN has a good discrimination effect on the various types of fault data.
4) The accuracy of training sets with different capacities and expansion modes differs on the same test set. Compared with the original training set without expansion and the expanded data set obtained by simple random replication, the identification accuracy of the training set expanded with GAN-generated samples is higher. | 2020-02-27T09:30:16.289Z | 2020-02-25T00:00:00.000 | {
"year": 2020,
"sha1": "7956fcb45b5c32070270f0ad513b7b46bba840cc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/752/1/012016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "537fdd173d3c3664d7cfa22cac19bcc387de2ff2",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
233894720 | pes2o/s2orc | v3-fos-license | Effectiveness Assessment of an Innovative Ejector Plant for Port Sediment Management
: The need to remove deposited material from water basins is common and has been shared by many ports and channels since the earliest settlements along coasts and rivers. Dredging, the most widely used method to remove sediment deposits, is a reliable and widespread technology. Nevertheless, dredging is only able to restore the desired water depth, without any kind of impact on the causes of sedimentation, and so it cannot guarantee navigability over time. Moreover, dredging operations have relevant environmental and economic issues. Therefore, there is a growing market demand for sustainable alternative technologies to dredging able to preserve navigability. This paper aims to evaluate the effectiveness of guaranteeing a minimum water depth over time at the port entrance of the Marina of Cervia (Italy), where the first industrial-scale ejector demo plant was installed and has operated from June 2019. The demo plant was designed to continuously remove the sediment that naturally settles in a certain area through the operation of the ejectors, which are submersible jet pumps. This paper focuses on a three-year analysis of bathymetries realized at the port inlet before and after ejector demo plant installation and correlates the bathymetric data with metocean data (waves and sea water level) collected in the same period. In particular, this paper analyses the relation between sea depth and sediment volume variation at the port inlet and the ejector demo plant operation regimes. Results show that in the period from January to April 2020, which was also the period of full-load operation of the demo plant, the water depth in the area of influence of the ejectors increased by 0.72 mm/day, while in the whole port inlet area a decrease of 0.95 mm/day was observed. Furthermore, in the same period of operation, the ejector demo plant's impact on volume variation was estimated in a range of 245-750 m3. DOI: 10.1061/(ASCE)SU.1943-5428.0000416. This work is made available under the terms of the Creative Commons Attribution 4.0 International license, https://creativecommons.org/licenses/by/4.0/.
Introduction
The presence of anthropic activity in the coastal environment strongly modifies waves, currents and sediment transport regimes. In particular, intense wave-induced currents and sediment transport rates are present around ports and commonly influence their commercial and recreational activities. The accumulated sediments reduce the admitted draft of the navigation channel on the one hand and generate erosional effects on the leeside coasts on the other. As a consequence, harbours frequently require ordinary maintenance dredging to remove the accumulated sediments [1]. Dredging is a consolidated and proven technology, but involves considerable drawbacks [2][3][4][5], since it has a notable impact on the marine environment, contributes to the mobility and diffusion of contaminants and pollutants already present in the settled sediments, and obstructs navigation during its operational phases. Periodic hydrographic surveys of the harbour area are essential for the accurate determination of quantities and timing of maintenance dredging. Since maintenance dredging is often performed on an as-needed basis, regular surveys become an essential tool to properly time the work [6]. Furthermore, bathymetries are surveyed before and after dredging operations to estimate the volume of the sediment removed. On the other hand, the use of dredging equipment allows measuring the dredged material, the amount of which is usually defined by contract.
New approaches have been developed over the years as alternative methods to dredging. In particular, [7] classifies alternative solutions to dredging in three categories: (i) anti-sedimentation structures, (ii) remobilizing sediment systems, and (iii) sand by-passing plants. Anti-sedimentation structures considerably reduce the amount of sediment to be removed from harbour inlets but present environmental concerns and still require sediment removal [8]. Remobilizing sediment systems use an injection of water or the movement of mechanical devices such as dredger propellers to cause the lift and separation of the grains from the seabed. In particular, water injection dredging has been widely applied as a cheaper and less impactful solution than traditional dredging [9]. Nevertheless, environmental issues due to the lack of control of the resuspended sediment deserve further investigation [10,11], while some technical limitations are present if the sediment is mainly composed of sand. Sand by-passing plants have limited environmental impacts but are characterized by relatively high installation costs and often uncertain operational costs.
An innovative sand by-passing technology tested in the framework of the LIFE MARINAPLAN PLUS and STIMARE projects [12][13][14] is based on a patented undersea ejector able to keep the designed draft of the entrance channel over time through continuous removal of settling sediments. If the sediment is properly handled, the instantaneous removal can also produce benefits in counteracting neighbouring erosion processes. The ejector (Figure 1) is an open jet pump (i.e., without a closed suction chamber and mixing throat) with a converging section instead of a diffuser and a series of nozzles positioned circularly around the ejector. The ejector has a diameter of about 250 mm and an overall length of about 400 mm. Each ejector is placed on the waterbed and transfers momentum from a high-speed primary water jet flow to a secondary flow, which is a mixture of water and the surrounding sediment. The sediment-water mixture is then conveyed through a pipeline and discharged in an area where the sediment can be picked up again by the main water current or where it is not an obstacle for navigation. Both the water feeding and discharge pipelines are DN80 spiral tubes (external diameter of about 90 mm). Based on a preliminary test carried out in 2017 [14], it is known that with a primary water feeding flowrate of about 27 m3/h, a working pressure of about 2.4 bar, and a discharge pipeline of 60 m length, one ejector is able to convey a peak sand flowrate at the discharge pipeline of about 2 m3/h with a water pump power consumption of about 3.5 kW. In the same peak working condition, the whole discharge flowrate of one ejector is about 34 m3/h of water-sediment mixture, i.e., a peak sediment concentration of about 6% by volume. Each ejector works on a limited circular area created by the pressurized water outgoing from the central and circular nozzles, the diameter of which depends on the sediment characteristics such as, for example, the angle of repose. By integrating ejectors in series and in parallel, it is possible to create or maintain a seaway. The technology is reliable since, generally speaking, jet pumps have been applied to coastal applications since the 1970s. The ejector technology has been developed and tested since 2001 by the University of Bologna and the start-up Plant Engineering Srl. In 2005, the first experimental plant was realized and tested in the port of Riccione (Italy). In 2012, a second experimental plant [15,16] was realized in Portoverde Marina (Italy). Both installations were realized at port entrances and were designed to handle sand. A third experimental installation was realized in 2018 in Cattolica (Italy), where for the first time the ejectors were applied to the management of silt and clay sediments and installed in a river channel [17].
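As a quick consistency check on the peak figures above, the sediment concentration by volume follows directly from the stated flowrates:

$$c_v = \frac{Q_{sand}}{Q_{mixture}} = \frac{2\ \mathrm{m^3/h}}{34\ \mathrm{m^3/h}} \approx 5.9\% \approx 6\%$$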
An ejector demo plant has been realized at the port entrance of the Marina of Cervia (Italy) and has been in operation from June 2019 to September 2020.
The aim of the paper is to assess the ejector demo plant's effectiveness, defined as its ability to maintain navigability at the port entrance over time. The novelty of the study is related to the methodology applied for the evaluation of the impact of the ejector demo plant on both water depth and sediment volume variations at the port inlet. In fact, for dredging and other alternative technologies that operate over a short time, the evaluation of the impact is based on the comparison of bathymetries of the area of interest before and after sediment removal; the ejector plant works continuously, and so the effects are monitored over a long period. Therefore, natural sediment transport is also relevant and should be taken into account in the effectiveness evaluation. The prediction of sediment transport is very complex: it always needs an accurate phase of calibration and validation based on measurements, and it requires a good sediment transport model driven by reliable input data of waves and currents, initial bathymetry, sediment characteristics, etc. A large amount of literature is available and interesting examples can be found in [18][19][20][21][22]. Nevertheless, the analysis of the effectiveness of a continuously working sand by-passing system has never been approached, and in previous applications the by-passed amount of sediment has always been estimated starting from dredging needs [23]. This paper's investigation was carried out by comparing water depth and sediment volume variation over time before and after ejector demo plant installation through the analysis of bathymetries and metocean data. Therefore, the sediment transport rate in the area of the Cervia port entrance was firstly assessed, starting from the analysis of the detailed bathymetries realized in the last 3 years. Moreover, the 3-year metocean climate at the site was analysed and discussed together with the bathymetric evidence. Then, the operation period of the ejector demo plant was compared with the two previous years, from June 2017 to June 2019. The effectiveness of the demo plant was evaluated on the basis of the different operation and control strategies tested.
Description of the Study Site: Cervia, North Italy
The harbour of Cervia (Figure 2) is located on the coast of the Emilia-Romagna region, Italy. It was designed and realized along an artificial canal to convey the salt produced in the nearby salt flats. Therefore, the harbour had a very important role in the past, interconnecting the land and maritime markets. Maintenance dredging activities have been carried out since the first half of the nineteenth century, when the docks started to be lengthened to balance coast advancement. A redesign of the harbour occurred during the 1970s, when the local municipality decided to modernize the existing infrastructure and to realize a marina able to satisfy users' demands.
The Marina of Cervia currently extends over an area of approximately 43,000 m2 with a capacity of around 300 berths. A further lengthening of the docks (20 m for the southern dock and 40 m for the northern dock) was planned and realized in 2009 by the municipality as a countermeasure against the coast advancement and to prevent port inlet sedimentation.
Nevertheless, traditional dredging and sediment handling through dredger propellers (the latter being a remobilizing sediment technique in which dredger propellers are used to remobilize the sediment from the seabed) were still planned and periodically operated at the port entrance [14]. In the period 2009-2015, more than 17,000 m3 of sediment was removed yearly, with dredging from the basin being responsible for a total expenditure of about EUR 1 million-i.e., a weighted average cost of EUR 8.31 per dredged cubic meter of sediment. Furthermore, in the same period, the Municipality of Cervia invested about EUR 350,000 in propeller operations, almost once per year. The physical-chemical characterization of the sediment present at the port inlet revealed that the main component is sand (97%), while the specific weight is 1.9 g/mL. The annual net longshore sediment transport in the area offshore the port of Cervia is estimated to be equal to zero [24]. Therefore, the Cervia port area is commonly defined as a convergence point for the annual longshore sediment transport, with the convergence point position being affected by the annual wave climate. As is common in the whole Northern Adriatic Sea, the wave climate is characterized by severe storms mainly generated by north-easterly winds, named Bora, even if south-easterly winds, named Sirocco, may have relevant seasonal impacts [25,26], with the latter generally inducing the highest surge levels [27]. Details on the wave buoy are available in [28]. The wave roses in Figure 3 show the annual distribution of the significant wave height (left panel) and the peak period (right panel) versus the mean wave directions at the buoy, showing: (i) the most energetic waves, up to 4.0 m in height, propagating from the sector 50-60° N; (ii) the most frequent conditions, with wave heights up to 1.0 m, coming from 90° N; (iii) the highest wave periods, with values ranging from 9 to 11 s, coming from 90° N; (iv) the most frequent values ranging from 5 to 7 s.
The Ejector Demo Plant of Cervia
As a possible alternative solution to the port sedimentation problem, the ejector technology was proposed and tested starting from 13 June 2019. The ejector demo plant installed in Cervia has the main objective of guaranteeing navigability at the port inlet while in operation. The Cervia demo plant consists of 10 ejectors located at the port entrance, as shown in Figure 4, with in- and out-flow pipelines laying on the seabed, delivering the mixture discharge composed of the moved sediments and water to a location south of the port entrance channel. The Cervia demo plant also includes a fully automated and remotely accessible pumping station equipped with auto-purging filters. The Piping and Instrumentation Diagram (P&ID) of the pumping plant is schematically shown in Figure 5, where only one ejector line is drafted. There are two submersible pumps, each one feeding five ejectors. Each pumping line has an auto-purging disk filter: the auto-purging cycle is activated once the pressure drop in the filter reaches a certain level. The total pumped water flowrate is controlled by an inverter, while the flowrate for each ejector feeding pipeline is balanced through electrovalves. An air compressor can be used to inject compressed air into the line to easily identify the positions of the ejectors on the seabed. The total installed power is about 80 kW. A local meteorological station measuring both wind speed and direction has been installed to relate plant operation to sea weather conditions: in particular, when the wind speed exceeds a predefined threshold, indicating the risk of a heavy sea, the water flowrate feeding the ejectors is set at the maximum value (about 30 m3/h per ejector) to guarantee a sufficient sediment suction and conveying capacity.
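A minimal sketch of the wind-based flowrate control logic described above is given below; the threshold value and the reduced flowrate are illustrative assumptions, not the actual plant settings:

```python
# Wind-based per-ejector flowrate setpoint logic (illustrative values).
WIND_THRESHOLD_MS = 10.0  # assumed wind speed threshold [m/s]
Q_MAX_M3H = 30.0          # maximum water flowrate per ejector [m3/h]
Q_NORMAL_M3H = 20.0       # assumed reduced flowrate in calm weather [m3/h]

def ejector_flowrate_setpoint(wind_speed_ms: float) -> float:
    """Return the per-ejector feeding flowrate setpoint [m3/h].

    When the measured wind speed exceeds the threshold (risk of heavy sea),
    the setpoint is raised to the maximum to guarantee sufficient sediment
    suction and conveying capacity.
    """
    return Q_MAX_M3H if wind_speed_ms > WIND_THRESHOLD_MS else Q_NORMAL_M3H
```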
Ejector Demo Plant Operation and Monitoring
Cervia's ejector demo plant was operated continuously from June 2019 to September 2020, thus achieving the objective of the LIFE MARINAPLAN PLUS project-namely, the monitoring of performance and impacts produced [14] for a minimum period of operation of 15 months. Figure 6 summarizes the five operating phases into which the ejector demo plant operation can be divided. In the first and second phases, the ejector demo plant was operated at a reduced load (25% and 50%, respectively) with manual control; such a strategy was necessary to limit pressure and power consumption, since some demo plant devices showed lower performance than that declared by the suppliers. Then, the demo plant entered the third and fourth phases of operation, in which the full load of the demo plant was reached. Nevertheless, in the same periods a growing issue related to mussel (Mytilus galloprovincialis) fouling in the pipes and filters was detected. The performance of the demo plant was highly affected by fouling, since the water flowrate to the ejectors was reduced and a higher pressure was needed, thus dramatically increasing power consumption. This is why only 2 ejectors were in operation in the fifth phase. Therefore, in order to assess the effectiveness of the ejector demo plant, only the bathymetric surveys collected in the period from June 2019 (before ejector demo plant operation) to April 2020 have been analysed. The bathymetries realized after May 2020 refer to the demo plant operating with only two ejectors and are not comparable with the previous ones.
For the whole operating period, energy consumption and ordinary and extraordinary maintenance activities have been measured and computed to also evaluate the technical and economic efficiency of the ejector demo plant. Furthermore, the environmental impact of the ejector demo plant has been assessed; in particular, the impacts on (i) integrity of seabed sediments and communities, (ii) underwater noise, and (iii) greenhouse gases (GHGs) and pollutant emissions have been evaluated. The results of these monitoring activities will be included in subsequent papers.
Analysis of Bathymetries: Water Depth and Sediment Volume Variation over Time
The analysis was carried out over 3 years, starting from June 2017 and ending in June 2020, in order to investigate, for similar loadings in terms of wave climate and seasons, the sediment transport at the port entrance (i) with propeller operation and dredging (2017-2019) and (ii) during the operation of the ejectors (2019-2020). The chosen periods are characterized by similar wave climates, as shown in the frequency tables in Appendix A. All the bathymetries collected were commissioned by the Municipality of Cervia and were carried out with a digital hydrographic ultrasound system (Hydrotrac model, manufactured by Odom Hydrographic Systems, Baton Rouge, Louisiana, U.S.) with a narrow emission cone, with the resulting error estimated as not exceeding 3 cm. The water depth reference is the mean water level. Table 1 shows the 10 bathymetries (see Appendix B) considered for the aim of the paper, including the timeframe and the relationship with sediment movimentation-i.e., dredging, propeller, and ejector demo plant operations. The bathymetries provided by the Municipality of Cervia include water depth measurements and the related coordinates in AutoCAD files. QGIS 3.14 built-in Python was used by the authors to generate a model, with TIN interpolation, which is better suited for elevation data; the assumed cell size was 5 m, and the output raster pixel sizes in X and Y were 0.1. The X and Y coordinates are in the Project Coordinate Reference System (CRS) WGS 84 (EPSG:4326) and the chosen unit is meters. In all the bathymetries, a common area can be identified (Figure 7). Water depth measurements in this area are present in each analysed bathymetry. The definition of cells was necessary because comparing water depth variation at each measured point of the available bathymetries is rather impossible due to the relatively low accuracy in measured point replicability. By considering only the complete 5 × 5 m cells, which number 486, a whole area of 12,150 m2 can be defined. The size of the cell (5 × 5 m) is compatible with the area of influence of one ejector. The red area identified in Figure 7 shows the area directly impacted by ejector demo plant operation, which is composed of 32 cells and measures 800 m2. The black dots represent the mooring points of the inlet (north) and outlet (south, outside the surveyed area) ejector pipes (see also Figure 4).
Using the QGIS built-in Python, the average water depth and volume were computed for each cell in each bathymetry. The base level was assumed to be −7 m for computing purposes only. Water depth and volume variation can then be compared over time by considering subsequent bathymetries. Any time dredging and/or propeller operations were performed, the corresponding bathymetry (i.e., bathymetry ID numbers 01, 04, and 06 in Table 1) was considered as the baseline for the following ones.
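A minimal NumPy sketch of the per-cell computation described above is given below: soundings (x, y, depth) are binned into 5 × 5 m cells, and a mean depth and a volume above the −7 m base level are computed per cell. Coordinates are assumed to be already projected in meters, and the variable names are illustrative:

```python
import numpy as np

CELL = 5.0         # cell size [m]
BASE_LEVEL = -7.0  # computational base level [m]

def cell_stats(x, y, depth, x0, y0, nx, ny):
    """Mean depth [m] and water volume above the base level [m3] per cell."""
    ix = ((x - x0) // CELL).astype(int)
    iy = ((y - y0) // CELL).astype(int)
    mean_depth = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            in_cell = (ix == i) & (iy == j)
            if in_cell.any():
                mean_depth[j, i] = depth[in_cell].mean()
    # Water column between the seabed and the base level, per 5 x 5 m cell
    volume = (mean_depth - BASE_LEVEL) * CELL * CELL
    return mean_depth, volume
```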
Analysis of 3-Year Metocean Climate on the Cervia Site
Since the annual net longshore sediment transport in the area offshore the port of Cervia is estimated to be equal to zero [24], sea storms are the most significant driving forces leading to sediment transport and coastal changes; therefore, the identification of each single sea storm is necessary to assess port sediment management in Cervia. Following the same analysis of the Nausicaa wave buoy performed in [29,30] (the time series of the wave height is reported in Appendix C), a sea storm is defined as an event in which the significant wave height exceeds 1.5 m (i.e., the chosen threshold value) and remains over this value for at least 6 h. Two storms are considered as separate if the wave height decays below the threshold for 3 or more consecutive hours.
The study of marine weather events was completed with the calculation of the total energy E of each storm, obtained by integrating the square of the significant wave height $H_s$ over the duration of the storm (dur), following the methodology of [31], subsequently adopted by [32] for local studies, to adapt the scale of ocean storms proposed by [30] to the Mediterranean context:

$$E = \int_{dur} H_s^2 \, dt$$

The event was then classified, following [31], through the energy classification scale defined in Table 2. The purpose of the energy classification is to compare only periods in which the sea storm characterization is similar, based on the following assumptions: (i) the longshore sediment transport can be considered rather constant in the area and (ii) the higher the storm energy registered in a certain period, the lower the contribution of longshore sediment transport to sedimentation or erosion. Table 3 reports the list and characteristics of the identified sea storms occurring in the period 2017-2020, together with the contemporary sea level and the maximum sea level during the storm measured at a close tidal station, and with the bathymetry surveys and sediment movimentation actions at the port entrance.
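A minimal sketch of the storm identification and energy computation defined above is given below, assuming an hourly time series of significant wave height; the threshold, minimum duration, and separation gap follow the text, while the trapezoidal integration is an assumption:

```python
import numpy as np

THRESHOLD = 1.5   # significant wave height threshold [m]
MIN_DURATION = 6  # minimum storm duration [h]
MAX_GAP = 2       # hours below threshold tolerated inside one storm (< 3 h)

def find_storms(hs, dt_hours=1.0):
    """Return (start, end, energy) tuples; E = integral of Hs^2 dt [m2 h]."""
    above = hs > THRESHOLD
    storms, start, gap = [], None, 0
    for i, flag in enumerate(above):
        if flag:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > MAX_GAP:  # 3 or more hours below threshold ends the storm
                end = i - gap
                if (end - start + 1) * dt_hours >= MIN_DURATION:
                    storms.append((start, end,
                                   np.trapz(hs[start:end + 1] ** 2, dx=dt_hours)))
                start, gap = None, 0
    if start is not None and (len(hs) - start) * dt_hours >= MIN_DURATION:
        storms.append((start, len(hs) - 1,
                       np.trapz(hs[start:] ** 2, dx=dt_hours)))
    return storms
```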
Results of Meteoclimate Analysis
The analysis was performed with the aim (i) of characterizing the extreme storm events mainly responsible for the short-term sediment movimentation occurring at the port entrance and (ii) of linking the amount of energy released by these events to the sediment movement at Cervia, in order to investigate the effectiveness of the ejector plant in 2019-2020 and to compare it with the observations of the previous years (2017-2019), when no ejectors operated. Table 3. Sea storm events in the period 2017-2020, together with the list of bathymetry surveys and sediment movimentation at the port entrance: peak significant wave height (Hs), mean (Tm) and peak (Tp) wave period, mean wave direction (MWD), compass sector, storm duration (dur), storm energy (E), energy class, sea level at the Hs instant, and maximum sea level during the storm. The yearly analyses of the sea storm events (Figure 8) show similarity among the considered years, with an average of 13 sea storm events per year. However, the seasonal distribution of the cumulated storm energy in Figure 9 is far from uniform across the years. Although the overall wave conditions vary a lot over the years (see Figure 9), isolated periods with similar wave characteristics (in terms of cumulative storm energy) do occur, as highlighted in Table 4. For instance, the first (13 June 2017-28 December 2017) and last (9 January 2020-30 April 2020) periods analysed in Table 4 are characterized by a comparable number of sea storms (four and six, respectively) and amount of released energy (530 and 650 m2 h, respectively).
Analysis of Water Depth Variation before and after Ejector Demo Plant Operation
After the propeller operation performed in June 2017, the water depth at the port entrance appeared as reported in Figure 10, where the navigation channel presented a depth of around 4 m, with 100 m width and 200 m length, while the northern area had a water depth shallower than 2 m. The positions of the mooring points of the ejectors' inlet and outlet pipelines are also reported with black dots in Figure 10. Figure 11 shows the maps of water depth changes between two consecutive surveys in 2017-2018, where a hot-cold colour scale represents accumulated or eroded sediment volumes. In the first map, Figure 11a, seabed modification after the summer and autumn seasons was observed, revealing in the first phase a sediment dynamic that mainly affected the navigation channel, while in the second phase the water depth variation was concentrated very close to the port docks.
In May 2018, a new propeller operation was needed since the port entrance was substantially closed, i.e., the water depth was under 2 m at the port inlet. Figure 12 shows the bathymetry realized after the propeller operation, which is very similar to the one obtained in Figure 10. The water depth changes measured in the following bathymetry are shown in Figure 13 and reveal the same behaviour as observed in Figure 11: the sedimentation phenomenon is strongest in the central navigation channel, while sediment moved in from the surrounding areas.
From January to April 2019, the last dredging and propeller operation were realized before the ejector demo plant installation. Figure 14 shows the bathymetry realized on 10 April 2019. During this time, the sediment management activities affected a wider area, including the entrance to the docks and the inner channel. Nevertheless, after 2 months, new sedimentation occurred in the area in front of the entrance to the docks, as already observed in the previous years (see Figure 15a). The survey of June 2019 (Figure 15b) is the reference bathymetry used to evaluate the effectiveness of the ejector demo plant in keeping a sufficient water depth at the port entrance. The minimum water depth at the port entrance to guarantee navigability was set at 2.5 m, since below this level the fishing and leisure boats that use the Marina of Cervia usually start having navigability issues. Figure 16 has been realized on the basis of the bathymetries from June 2019 to April 2020 (included in Appendix B) and shows if and where the minimum water depth was reached in the common area at the port inlet.
The first relevant result is that at the end of the monitoring period (end of April 2020) a navigable channel (i.e., water depth over 2.5 m) to enter the port of Cervia was still present. It is interesting to note that in January 2020 the situation appeared critical in the area of influence of the ejectors, and that this critical situation had been worsening since September 2019. Nevertheless, it should be considered that until February 2020 the ejector demo plant was not able to operate at full load, while starting from February 2020 the technical issues that limited the operation of the demo plant were solved.
Analysis of Volume Variation before and after Ejector Demo Plant Operation
Volume variation over time has been considered in both areas shown in Figure 7 (common area and ejector area). The parameter computed for the comparison is the water depth variation per day, expressed in mm per day, which is calculated by dividing the volume variation between two consecutive bathymetries by the area under consideration (i.e., common area or ejector area) and by the number of days between the two consecutive bathymetries. The results are shown in Figure 17. The volume variation in both common and ejector areas has been related to the metocean data to evaluate whether the positive water depth variation observed in Figure 17 is influenced by the natural sediment transport dynamics. In this case, the parameter computed for the comparison is the water depth variation per storm energy unit, expressed in mm per m² h, which is calculated by dividing the volume variation between two consecutive bathymetries by the area under consideration (i.e., common area or ejector area) and by the cumulated energy produced by the storms registered between the two consecutive bathymetries (see Table 3 for metocean data). The results are shown in Figure 18. The implicit assumption in drafting Figure 18 is that sediment transport in both areas under investigation is mainly due to storms. Therefore, different periods considered in the analysis can be compared only if similar storm conditions occur.
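The two comparison parameters can be restated compactly; the sketch below simply encodes the definitions given above (the function and argument names are ours).

```python
def depth_variation_rates(volume_change_m3, area_m2, days, energy_m2h):
    """Water depth variation rates between two consecutive bathymetries.

    volume_change_m3 : volume variation between the two surveys [m^3]
    area_m2          : common area or ejector area [m^2]
    days             : days elapsed between the two surveys
    energy_m2h       : cumulated storm energy in the interval [m^2 h]
    Returns (rate in mm/day, rate in mm per m^2 h).
    """
    depth_change_mm = volume_change_m3 / area_m2 * 1000.0
    per_day = depth_change_mm / days
    per_energy = depth_change_mm / energy_m2h if energy_m2h > 0 else float("nan")
    return per_day, per_energy
```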
Discussion
From the analysis of Figures 10-15, it is clear that the dredging and propeller operations realized in the centre of the navigation channel only partially solve the problem of navigability of the port inlet, since within a few months the hole created at the centre of the channel was covered again by sediment. In particular, part of the sediment came from the surrounding area, and the natural sediment transport produced in the area by storms and longshore currents is expected to contribute as well. The impact of dredging and propeller operation on the water depth variation rate is also confirmed by the analysis of Figures 17 and 18. In particular, both figures show sediment dynamics that seem to affect the ejector area more than the common area as a whole. Such a dynamic is explained by the fact that the ejector area is directly affected by both dredging and propeller operation, meaning that after artificial deepening of the seabed, the ejector area is usually characterized by a water depth that is greater than the mean value of the common area. The result is that the ejector area works as a sediment trap after dredging or propeller operation. Moreover, in the ejector area, in the first period after dredging or propeller operation (13 June 2017-28 December 2017, 11 May 2018-10 October 2018, 10 April 2019-12 June 2019), we observed a higher water depth variation than in the common area, while in the period 28 December 2017-7 April 2018, which is characterized by the highest number of storms, the highest energy in the period, and the highest energy per day, the water depth variations in both areas are comparable. Furthermore, Figure 18 shows a marked decrease in the magnitude of the water depth variation rate per storm energy unit between the two consecutive periods from June to December 2017 and from December 2017 to April 2018: in the first period the rate is −1.31 mm/m² h, while in the second period it is −0.34 mm/m² h. It can be concluded that after a faster variation of water depth in the ejector area, which is the one mainly affected by dredging and propeller operation, the water depth variation tends to homogenize over the common area. Therefore, the "artificial" increase in water depth modifies the natural sediment dynamic, since the depression created in the port inlet attracts both the nearby sediment and the sediment that is naturally transported into the area. A better option would be not to dredge or move the sediment via propeller operation along the navigable channel, but to work on the southern or northern areas. Such planning would also be beneficial in combination with ejector demo plant operation, since the dredging or propeller operation that may be needed to remove sediment accumulation outside of the ejector area for beach nourishment purposes would not affect the integrity of the ejector demo plant.
While Figure 17 indicates that there is no constant relationship between the water depth variation in the common and in the ejector areas, the same figure suggests that the water depth variation in the common area before and after ejector demo plant operation has a comparable intensity, i.e., −0.5 to 2.5 mm/day of mean variation, with the exception of the period 10 April 2019-12 June 2019, in which the water depth variation in the common area reaches −6.5 mm/day. Such a large water depth variation rate is explained by the combined effects of (i) the dredging and propeller operation realized over a wide area at the port entrance (see Figure 14) and (ii) the relatively high storm energy measured in the period. The period 12 June 2019-6 September 2019 is characterized by a very low level of storm energy, which is almost zero, yielding the highest value in Figure 18. In this case, it is probable that the sediment transport did not occur only due to the single storm registered in the period, but that natural longshore sediment transport also contributed. Therefore, the longshore sediment dynamic is worth investigating in this area to better evaluate the volume and direction of the sediment transport during fair weather periods.
The most interesting evidence from Figures 17 and 18 is that in the last period of operation of the ejector demo plant (9 January 2020-30 April 2020), which overlaps with phase 3 of operation (see Figure 6), a positive water depth variation (i.e., increasing navigability) can be observed in the area of the ejectors, while in the common area a negative water depth variation was observed in the same period. The last period of operation of the ejector demo plant is comparable to the first period, 13 June 2017-28 December 2017, since the two periods were characterized by similar energetic forcing from the sea and similar metocean characteristics (see Tables 3 and 4). It is interesting to note that in the comparison period there is a relevant negative water depth variation, especially when compared with the common area. This suggests that in the last period the impact of ejector demo plant operation is evident and contributed to keeping the water depth almost constant in the ejector area.
By assuming the same mean rate of water depth variation as in the period 13 June 2017-28 December 2017, i.e., −1.31 mm/m² h, it can be estimated that, without the ejector demo plant in operation in the period 9 January 2020-30 April 2020, the water depth would have varied by about −0.855 m in the ejector area, while a mean water depth variation of about 0.081 m has been observed. Therefore, the net contribution of the ejector demo plant can be evaluated as a maximum water depth variation of 0.936 m, which corresponds to a maximum volume of by-passed sediment of about 750 m³.
Nevertheless, since the water depth variation rate in the period 13 June 2017-28 December 2017 may be influenced by the bathymetric changes produced in the area by the previous dredging operation, the potential impact of the ejectors has also been evaluated by assuming the mean rate of water depth variation of the following period, 28 December 2017-7 April 2018, i.e., −0.34 mm/m² h. If this water depth variation rate is applied, it can be estimated that, without the ejector demo plant in operation in the period 9 January 2020-30 April 2020, the water depth would have varied by about −0.223 m in the ejector area. In this case, the net contribution of the ejector demo plant can be evaluated as a maximum water depth variation of 0.304 m, which corresponds to a maximum volume of by-passed sediment of about 245 m³.
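The two estimates can be checked with a few lines of arithmetic. The ejector area is not stated in this excerpt; a value of about 800 m² is inferred here from the quoted depth and volume pairs (0.936 m and 750 m³; 0.304 m and 245 m³), so it is an assumption of this sketch, and small differences with the quoted figures reflect rounding.

```python
EJECTOR_AREA_M2 = 800.0   # assumed: inferred from the quoted depth/volume pairs
E_2020_M2H = 650.0        # cumulated storm energy, 9 Jan-30 Apr 2020 (Table 4)
OBSERVED_DM_M = 0.081     # observed mean water depth variation [m]

for rate_mm_per_m2h in (-1.31, -0.34):      # reference rates from 2017-2018
    predicted_m = rate_mm_per_m2h * 1e-3 * E_2020_M2H   # expected without ejectors
    net_m = OBSERVED_DM_M - predicted_m                 # net ejector contribution
    volume_m3 = net_m * EJECTOR_AREA_M2                 # by-passed sediment volume
    print(f"{rate_mm_per_m2h:+.2f} mm/m^2h -> net {net_m:.3f} m, "
          f"about {volume_m3:.0f} m^3")
```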
While the positive impact produced by the ejector demo plant in the ejector area is evident and can be estimated for the period 9 January 2020-30 April 2020, the same cannot be said for the previous periods. The different impacts are explained by the different operation regimes imposed on the ejector demo plant. In particular, the previous periods are characterized by lower water flowrates feeding the ejectors. Nevertheless, it is not possible to exclude that the ejector demo plant operation made some contribution in the periods 12 June 2019-6 September 2019 and 6 September 2019-9 January 2020. Further investigation is needed to design a model of the sediment dynamics at the port entrance, which will be validated against the bathymetries realized before demo plant operation; the sediment dynamics under the same metocean conditions registered during ejector demo plant operation will then be simulated to verify the net contribution of the ejector demo plant to the water depth variation.
Conclusions
An innovative technology for sediment management in water infrastructure has been tested in the first industrial-sized demo plant at the port entrance of the Marina of Cervia (Italy). The monitoring campaign, concluded in September 2020, comprised several activities, including effectiveness, efficacy, and environmental impact assessments. This paper investigates the effectiveness achieved, and the results demonstrate that the ejector demo plant was able to guarantee navigability at the port inlet after almost one year of operation (June 2019-April 2020). In particular, the maximum impact of the ejector demo plant on keeping the water depth at the desired level (i.e., over the minimum threshold of 2.5 m) was observed in the period January-April 2020, in which the ejector demo plant was able to operate at the design water feeding flowrates and with an estimated by-passed sediment volume of between 245 and 750 m³.
Further investigation is needed to confirm these results through (i) the design of a model of the sediment dynamics at the port entrance and (ii) the simulation of the sediment transport under the same metocean conditions registered during the ejector demo plant operation, in order to confirm the contribution to water depth and sediment volume variation. These activities will be carried out once the monitoring of the water depth at the port entrance, which is still ongoing, is completed in the second half of 2021.
Finally, based on the good performance of the ejector demo plant shown in the last period of operation (January-April 2020), an adaptation of the existing ejector configuration is under evaluation to optimize demo plant operation. The hypothesis is to space out the ejectors and, at the same time, move them closer to the port entrance. Another hypothesis under investigation is reducing the number of installed ejectors to reduce the overall power consumption.

Acknowledgments: The study is part of the "STIMARE (Strategie Innovative per il Monitoraggio ed Analisi del Rischio Erosione, www.progettostimare.it (accessed on 19 December 2020))" project, funded by the Italian Ministry for the Environment and Protection of the Territory and the Sea, which aims to study shoreline evolution in the presence of coastal defence structures with innovative monitoring techniques and strategies.
Conflicts of Interest:
The authors declare no conflict of interest.
The Stellar Ancestry of Supernovae in the Magellanic Clouds - I. The Most Recent Supernovae in the Large Magellanic Cloud
We use the star formation history map of the Large Magellanic Cloud to study the sites of the eight smallest (and presumably youngest) supernova remnants in the Cloud: SN 1987A, N158A, N49, and N63A (core collapse remnants), 0509-67.5, 0519-69.0, N103B, and DEM L71 (Type Ia remnants). The local star formation histories provide unique insights into the nature of the supernova progenitors, which we compare with the properties of the supernova explosions derived from the remnants themselves and from supernova light echoes. We find that all the core collapse supernovae that we have studied are associated with vigorous star formation in the recent past. Stars more massive than 21.5 Msun are very scarce around SNR N49, implying that the magnetar SGR 0526-66 in this SNR was either formed elsewhere or came from a progenitor with a mass well below the 30 Msun threshold suggested in the literature. Three of our four Ia SNRs are associated with old, metal-poor stellar populations. This includes SNR 0509-67.5, which is known to have been originated by an extremely bright Type Ia event, and yet is located very far away from any sites of recent star formation, in a population with a mean age of 7.9 Gyr. The Type Ia SNR N103B, on the other hand, is associated with recent star formation, and may have had a younger and more massive progenitor with substantial mass loss before the explosion. We discuss these results in the context of our present understanding of core collapse and Type Ia supernova progenitors.
INTRODUCTION
The identification of the progenitor stars of supernova (SN) explosions is one of the central problems of stellar astrophysics. In the case of core collapse supernovae (CC SNe: Types II, Ib, Ic, and derived subtypes) the progenitors are known to be massive (M > 8 M ⊙ ) stars whose inner cores collapse to a neutron star or a black hole. In a few cases, it has been possible to constrain the properties of the progenitor star using pre-explosion images or the turn-off masses of compact clusters (Smartt et al. 2008; Vinkó et al. 2009), but there are still many open issues regarding which stars lead to specific subtypes of CC SNe (for an extended discussion and a complete set of references, see Smartt et al. 2008; Kochanek et al. 2008). In the case of thermonuclear (Type Ia) SNe, the situation is much more complex. Although a CO white dwarf (WD) in some kind of binary is almost certainly the exploding star, the exact nature of the progenitor system has never been firmly established, either theoretically or observationally (see Maoz 2008, and references therein). When direct identifications are not possible, the properties of the progenitors can be constrained using the stellar populations around the exploding stars.
A number of studies have done this for SNe in nearby galaxies (e.g. Hamuy et al. 2000;Sullivan et al. 2006;Aubourg et al. 2008;Modjaz et al. 2008;Prieto et al. 2008;Gallagher et al. 2008), but this approach has important limitations. First, the available information, be it photometric (e.g. Sullivan et al. 2006) or spectral (e.g. Gallagher et al. 2008), is usually integrated over the entire host galaxy, although local measurements at the SN sites have been made for a small number of objects (e.g. Modjaz et al. 2008). This effectively ignores the metallicity and stellar age gradients that must be present in the host. Second, even in surveys that work with complete host spectra, the stellar populations are not resolved. Among other things, this means that the stellar light is weighted by luminosity, which can conceal many important properties of the stellar populations. In practice, the information that can be obtained from this kind of observations is restricted to average metallicities and ages, unless sophisticated fitting techniques are used to extract the star formation history (SFH) of the host (see Aubourg et al. 2008). Ideally, one would want to study resolved stellar populations associated with SN progenitors. The information that can be obtained in this way is much more detailed and reliable, but it requires focusing on very nearby SNe.
The present work is the first in a series of papers aimed at constraining the fundamental properties of CC and Ia SN progenitors in the Magellanic Clouds by examining the stellar populations at the locations of the supernova remnants (SNRs) left behind by the explosions.
To do this, we take advantage of the large amount of observational data accumulated on the stellar populations of the Clouds, in particular the star formation history (SFH) maps published by Harris & Zaritsky (2004) for the SMC and Harris & Zaritsky (2009) (henceforth, HZ09) for the LMC. To identify the sites of recent SNe, we rely on the extensively observed population of SNRs in the MCs (Williams et al. 1999). Much information about the SN explosions can be extracted from the observations of SNRs of both Ia and CC origin (Badenes et al. 2003; Chevalier 2005), and in some cases this information can be complemented by light echoes from the SNe themselves (Rest et al. 2005, 2008). In this first installment, we focus on the eight youngest SNRs in the LMC: SN1987A, N158A, N63A, and N49 (CC SNRs); 0509−67.5, 0519−69.0, N103B and DEM L71 (Ia SNRs). This paper is organized as follows. In § 2 we describe the criteria that have led to the selection of our eight target SNRs. In § 3 we review their types and the characteristics of the parent SNe that can be inferred from their observational properties. In § 4 we review the fundamental details of the SFH map of the LMC presented in HZ09. In § 5 we discuss the relevance that the local SFH has for the properties of the SN progenitors, given our knowledge about the global SFH and the stellar dynamics of the LMC. In § 6 we examine the local SFHs for the target SNRs, with specific comments relating each SFH to the SNe that originated the SNRs. In § 7 we discuss the impact that our findings have in the context of our current understanding of CC and Ia SN progenitors. Finally, in § 8 we present our conclusions and we outline some avenues for future research.
TARGET SELECTION
We will focus on young SNRs because they are usually ejecta-dominated and still contain a great deal of information about their parent SNe -in particular, the risk of mistyping young CC and Ia SNRs is minimal (see § 3). Identifying the youngest SNRs in a given set, however, is not trivial. Among the SNRs in the LMC, only one has a known age (SN 1987A), and only three (0509−67.5, 0519−69.0, and N103B) have more or less accurate age estimates from light echoes (Rest et al. 2005). In the absence of consistent age estimates for all objects, size is the best criterion to select the youngest ones. Much information about the SNR population in the LMC can be found in the ROSAT atlas by Williams et al. (1999), but the SNR sizes in particular are not reliable and must be revised. Sizes of SNRs with sharp outer boundaries are overestimated due to the large ROSAT PSF (e.g. 0509−67.5, Badenes et al. 2007), while sizes of diffuse SNRs are underestimated due to the low ROSAT effective area (e.g. N23, Hughes et al. 2006). We have searched the literature for more recent Chandra observations to constrain the LMC SNR sizes, and we have selected the eight smallest objects (sizes < 1.5 arcmin, see Table 1).
The age estimates listed in Table 1 merit a few comments. For SNRs without SN or light echo information, ages are calculated from the SNR size assuming a specific model for the SNR dynamics, which can introduce large uncertainties. In particular, the standard dynamical models for young SNRs (e.g. Truelove & McKee 1999) ignore the effect of cosmic ray acceleration at the forward shock. It is now widely accepted that energy losses due to cosmic ray acceleration can affect the size of young SNRs in a noticeable way (Ellison et al. 2004;Warren et al. 2005), which implies that calculations based on unmodified SNR dynamics can overestimate the age by as much as 20% (see § 5.2 in Badenes et al. 2007, for a discussion).
In Figure 1, we illustrate the location of our eight target SNRs within the large scale structure of the LMC using the data from field 13 of the Southern H-Alpha Sky Survey Atlas (SHASSA, Gaustad et al. 2001). Two SNRs, SN 1987A and N158A, are located in the 30 Dor region, the most prominent active star forming region in the LMC. Two more, N49 and N63A, are in the northern part of the disk, embedded in the North Blue Arm discussed in HZ09 and Staveley-Smith et al. (2003). SNRs 0519−69.0 and N103B are in the outer parts of the LMC bar. The last two objects, 0509−67.5 and DEM L71, are in rather inconspicuous parts of the LMC disk, in the area called the Northwest Void by HZ09. More specific discussions about the location of each SNR will be given in § 6.
FROM SUPERNOVAE TO SUPERNOVA REMNANTS: CORE COLLAPSE VS. TYPE IA

3.1. Typing SNRs

Typing SNRs as CC or Type Ia can be an uncertain and treacherous business. Both CC and Ia SNe deposit a similar amount of kinetic energy (∼ 10^51 erg) in the ambient medium (AM), which often makes it impossible to distinguish mature CC from mature Ia SNRs based on their size or morphology alone. A much more reliable way to type SNRs is to examine the evidence left behind by the explosion itself: X-ray spectrum from the SN ejecta and AM, SNR dynamics, and properties of the compact object or pulsar wind nebula (PWN), if present. In general, this can only be done for relatively young objects (but see Hendrick et al. 2003; Rakowski et al. 2006). By using methods along these lines, we have been able to determine the type of all the objects in our list with a high degree of confidence, and in some cases even the SN subtypes within the broader CC and Ia categories. In this Section we will discuss the classification and SN subtypes of our target SNRs, but before going into the details of each object, it is important to mention the work of Chu & Kennicutt (1988). These authors attempted to type all the LMC SNRs known at the time by noting the distance from each object to HII regions and OB associations. Although this is a very crude method, their conclusions regarding the CC or Type Ia nature of our target SNRs coincide with ours, except in the case of SNR N103B, which will be discussed in detail in § 3.5 and 6.2.
Core Collapse SN Progenitors And Subtypes
Our theoretical understanding of core collapse SNe is still incomplete (Janka et al. 2007). In particular, the mapping between progenitor mass and CC SN subtype is uncertain, because key processes like stellar mass loss and binary interactions are not well understood (Eldridge & Tout 2004; Eldridge et al. 2008). To set the stage for further discussions, the stellar evolution models of Eldridge & Tout (2004) for single stars of LMC metallicity (Z = 0.008) predict that stars between 8 and 30 M ⊙ will explode as red supergiants, retaining most of their H envelope and becoming Type IIP SNe, stars between 30 and 40 M ⊙ will lose a large part of their envelopes and explode as Type IIL or Type IIb SNe, and stars above 40 M ⊙ will lose all their envelopes and become naked CC SNe of Types Ib and Ic. Within naked CC SNe, there is some evidence that Type Ic SNe, which are linked to long duration gamma-ray bursts (Galama et al. 1998; Stanek et al. 2003), come from more massive stars than Type Ib SNe (Anderson & James 2008; Kelly et al. 2008). Stars that retain a massive H envelope but explode as blue supergiants instead of red supergiants form a separate class, often referred to as SN 1987-like events. The lifetimes associated with these stellar masses range between 41 Myr for an isolated 8 M ⊙ star and 5.4 Myr for an isolated 40 M ⊙ star (always taking the Z = 0.008 models from Eldridge & Tout 2004). In principle, mass loss will be facilitated by binary interactions, leading to fewer red supergiants and more Type Ib/c SNe in binary systems, but stellar evolution calculations that include these effects are subject to an entirely different set of uncertainties.
From the point of view of the SNRs, the complex and turbulent structure of most young CC SNRs makes a quantitative interpretation of the X-ray spectrum in terms of specific explosion models and progenitor scenarios very challenging (e.g. see Laming & Hwang 2003;Young et al. 2006;Park et al. 2007). Many times, it is hard to infer the SN subtype from the observational properties of the SNR, but the large intrinsic diversity of CC SNe as a class can often be used to some advantage in SNR studies. Chevalier (2005) argues that several aspects of SNR evolution are expected to be very different depending on the subtype of the parent SN: mixing in the ejecta, fallback onto the central compact object, expansion of the PWN, interaction with the CSM, and photoionization of the AM by shock breakout radiation. Using arguments along these lines, Chevalier & Oishi (2003) inferred from the positions of the fluid discontinuities, the presence of high velocity H, and the extent of the clumpy photoionized pre-SN wind in the Cas A SNR that its progenitor must have been a Type IIn or Type IIb event. This 'prediction of the past' was later confirmed by the spectroscopy of the light echo of the Cas A SN (Krause et al. 2008a), which is very similar to the spectrum of the Type IIb SN1993J. Although this agreement is certainly encouraging, we must insist that studies based on SNRs are still a long way from providing a robust method of subtyping CC SNe -as an example, the Type IIn/IIb classification of Cas A by Chevalier & Oishi (2003) was challenged by Fesen et al. (2006), who argued for a Type Ib progenitor.
Core Collapse SNRs
In the following paragraphs, we examine each of the four target CC SNRs in more detail. For a summary, see Table 2.
SNR N49 -This SNR harbors one of only two magnetars known outside the Milky Way: SGR 0526−66. In principle, the presence of a compact object should immediately classify this object as a CC SNR, but the association between this magnetar and the SNR has been controversial. Even disregarding the compact object, the ejecta emission shows significantly enhanced abundances from O and Si, but a comparatively small amount of Fe (Park et al. 2003), and the SNR is located within the OB association LH53 (Chu & Kennicutt 1988). Taken together, these arguments lend strong support to a CC origin. The complex filamentary structure of the shocked AM suggests that dense material surrounded the SN at the time of the explosion, which favors a progenitor with a slow wind, maybe a Type IIP SN. Unfortunately, this is just an educated guess, because the complex multiphase X-ray emission of the SNR and the poorly known age make the interpretation of the observations very challenging.
SNR N63A -This SNR has no detected compact object, although the upper limits do not exclude the presence of a low-activity PWN. It is embedded in the large HII region N63, and it also appears to be located within an OB association (NGC 2030, Chu & Kennicutt 1988), making a CC type very likely. The size, morphology, and X-ray spectrum show evident signs of a complex interaction with a highly structured AM, and they seem to indicate that it is expanding into a large cavity, which suggests a massive progenitor with a fast wind, maybe a Type Ib/c SN. However, there is no additional evidence to support this conclusion because the X-ray emission, which is dominated by the shocked AM, reveals very little about the properties of the SN ejecta. SN 1987A -The classification and subtype of SN 1987A are obvious from the SN spectroscopy. The vast amount of information available on this object is summarized in McCray (2007) and other publications in the same volume. For our purposes, it suffices to mention that the progenitor of this SN is known to have been a blue supergiant star, Sk −69° 202, whose initial mass has been estimated at ∼ 20 M ⊙ (Arnett 1991), and might have been part of a close binary system (Podsiadlowski et al. 1990).
SNR N158A -This object harbors a well-observed PWN that types it as a CC SNR and constrains its age to be ∼800 yr (Chevalier 2005). The SN subtype classification has been rather controversial. The SNR dynamics indicate that the shock wave is moving into dense, clumpy CSM similar to what can be found around a massive Wolf-Rayet star, and the presence of strong O and S lines in the X-ray spectrum of the innermost ejecta reveals that at least some of the heavy elements in the ejecta have not fallen back onto the central neutron star (Chevalier 2005). This would favor a massive Type Ib/c progenitor, but both the detection of H in the PWN filaments (Serafimovich et al. 2005) and a recent re-analysis of the ejecta emission seem to indicate that the progenitor might have been in the 20 − 25 M ⊙ range, implying a Type IIP explosion (for more detailed discussions, see Williams et al. 2008).
Type Ia SN Progenitors and Subtypes
Our current understanding of Type Ia SN progenitors is still extremely sketchy (Maoz 2008), but several interesting trends have been inferred from the bulk properties of the host galaxies. Any theoretical model for Type Ia progenitors must account for the fact that Type Ia SNe explode in elliptical galaxies with very little star formation (SF), but at the same time the rate of Ia events in star forming galaxies appears to scale with the specific star formation rate (Mannucci et al. 2006). Moreover, Type Ia SNe exploding in elliptical galaxies are on average dimmer than those exploding in star forming galaxies (Hamuy et al. 1996; Hamuy et al. 2000). Scannapieco & Bildsten (2005) and Mannucci et al. (2006) used these observational facts to postulate two populations of Type Ia SN progenitors: a 'prompt' population with short delay times (of the order of a few hundred Myr), associated with recent SF and leading to somewhat brighter SN Ia, and a 'delayed' population with longer delay times (of the order of Gyr), not associated with recent SF and leading to somewhat dimmer SN Ia. In this two-component model, the Type Ia SN rate in a given galaxy is expressed as SNR_Ia = A M_* + B Ṁ_*, with M_* being the total stellar mass in the galaxy and Ṁ_* the star formation rate. It is important to stress that the observed rates do not require the existence of two components; in some theoretical scenarios, Type Ia SNe from a single progenitor channel can explode with both very short and very long delay times (Greggio et al. 2008). Nevertheless, there have been several attempts to associate the prompt and delayed progenitor populations with the two leading theoretical scenarios for Type Ia SNe: single degenerate (SD) systems, in which the WD accretes material from a non-degenerate companion, and double degenerate (DD) systems, in which the WD accretes material from another WD. So far, none of these attempts has succeeded (see e.g. Förster et al. 2006; Greggio et al. 2008), and the identity of Type Ia SN progenitors remains a mystery.
Despite all the uncertainties regarding their progenitors, Type Ia SN explosions as a class are much more homogeneous and have less intrinsic dispersion than CC SNe. In particular, there is a simple relationship between the structure of the ejecta and the peak brightness of the SN that is well reproduced by one-dimensional delayed detonation (DDT) explosion models (Mazzali et al. 2007). This makes it possible to map the vast majority of SN Ia onto a sequence of bright to dim events based on the amount of 56 Ni that they synthesize. Generally speaking, Type Ia SNRs are also much less turbulent than CC SNRs, and most of them seem to be interacting with a relatively unmodified AM, although there are exceptions like the Kepler SNR (Reynolds et al. 2007) and SNR N103B (Lewis et al. 2003; see also the discussion in § 7.2). Thanks to this set of circumstances, it is generally easier to interpret the X-ray emission of Type Ia SNRs quantitatively in terms of specific explosion models, provided that the dynamic evolution of the SNR and the nonequilibrium processes in the shocked plasma are properly taken into account (Badenes et al. 2003; Badenes 2004; Badenes et al. 2005). It is also possible to estimate the brightness of the parent event from the mass of 56 Ni synthesized by the preferred DDT explosion model, as shown by Badenes et al. (2006) in the case of the Tycho SNR.
Type Ia SNRs
These four objects were classified as Type Ia SNRs by Hughes et al. (1995) based on their lack of compact object or PWN and the general properties of their X-ray emission, which is dominated by Fe lines and has only weak or absent lines from O. In order to confirm these classifications and derive the SN subtype, it is necessary to perform an in-depth analysis of the ejecta emission, as done by Badenes et al. (2008b) for SNR 0509−67.5. The X-ray emission of the other three objects will be the subject of a forthcoming publication (Badenes & Hughes 2009, henceforth BH09), but the main results of that analysis are presented in the following paragraphs, and summarized in Table 3.
SNR DEM L71 -This is the oldest object in our Type Ia SNR list, and the only one without a light echo age estimate. Ghavamian et al. (2003) determine an age of 4360 ± 290 yr from the SNR dynamics, and yet the X-ray spectrum appears dominated by shocked Fe from the SN ejecta, especially in the center. The old age of this SNR makes the analysis of the ejecta emission somewhat challenging, but BH09 find that it can be reproduced by DDT models for normal Type Ia SNe. SNR N103B -This object was initially classified as a CC SNR by Chu & Kennicutt (1988) based on its location at the edge of the HII region DEM 84 and 40 pc away from the OB association NGC 1850. However, the X-ray spectrum is strongly suggestive of a Type Ia origin (Lewis et al. 2003). The SNR is also remarkable in that it shows a strong east-west asymmetry (Lewis et al. 2003), which has been interpreted as a sign of some kind of CSM interaction, mainly by analogy to the Kepler SNR. This asymmetry also makes the ejecta analysis challenging, but BH09 find a relatively good match to the spectrum using DDT models for moderately bright Type Ia SNe with a SNR age close to the 800 yr estimated from the light echo by Rest et al. (2005).
SNR 0509−67.5 -Next to SN1987A, this SNR has the most secure subtype classification in the LMC. In 2008, two teams independently analyzed the optical spectrum of the SN light echo (Rest et al. 2008) and the X-ray emission and dynamics of the SNR, and came to the same conclusion: SNR 0509−67.5 was originated ∼ 400 yr ago by an exceptionally bright Type Ia SN that synthesized ∼ 1 M ⊙ of 56 Ni. This agreement is a very important validation of the modeling techniques introduced in Badenes et al. (2003) that BH09 apply to the other three target Type Ia SNRs, and in particular of the capability of the models to recover the SN subtype.
SNR 0519−69.0 -The final object in our Type Ia SNR list has an estimated age of 600 ± 200 yr from its light echo (Rest et al. 2005). Its X-ray emission is well reproduced by a moderately bright Type Ia SN model that synthesizes 0.8 M ⊙ of 56 Ni (BH09).
OVERVIEW OF THE STAR FORMATION HISTORY MAP OF THE LMC
The SFH map that we use in the present work is described in full detail in HZ09. The map was elaborated using four-band (U, B, V, and I) photometry from the Magellanic Clouds Photometric Survey, which has a limiting magnitude between 20 and 21 in V, depending on the local degree of crowding in the images. Data from more than 20 million stars was assembled to produce color-magnitude diagrams in 500 24′ × 24′ cells encompassing the central 8° × 8° of the LMC (see Figure 4 in HZ09), and then the StarFISH code (Harris & Zaritsky 2001) was applied to derive the local SFH for each cell. Cells with enough stars in them, like the eight cells that contain our target SNRs, were further subdivided into four 12′ × 12′ subcells. The SFH of each cell is given at thirteen lookback times between Log(t) = 6.8 (6.3 Myr) and Log(t) = 10.25 (17.8 Gyr), and it is broken into four metallicity bins: Z = 0.008, 0.004, 0.0025, and 0.001.
For reference in further discussions, we reproduce the SFH of the entire LMC from HZ09 in Figure 2. The error bars on the total SFH represented with the gray shaded area are dominated by crowding effects (see § 3.3 in HZ09). Although no metallicities were fitted for ages below 50 Myr, the plots display the canonical LMC metallicity (Z = 0.008) in the most recent age bins, which is reasonable in view of the high degree of homogeneity in the metallicity of the ISM and the young stars in the LMC (Pagel et al. 1978;Russell & Dopita 1990;Korn et al. 2002;Hunter et al. 2007). A detailed discussion of the SFH of the LMC and its interpretation in the context of the LMC's past history can be found in § 5 of HZ09. For our purposes, it suffices to note that, after an initial episode of SF in the distant past, the LMC went into a quiescent period that lasted until 5 Gyr ago, and since then it has been forming stars at an average rate of 0.2 M ⊙ yr −1 , with episodes of enhanced SF at 2 Gyr, 500 Myr, 100 Myr, and 12 Myr. From Figure 2, it is obvious that the vast majority of the stars in the LMC have ages above 1 Gyr. Most of these old stars have metallicities of one tenth solar or lower.
ON THE RELEVANCE OF THE LOCAL STELLAR POPULATIONS TO SUPERNOVA PROGENITORS
During the lifetime of a galaxy, several processes naturally mix the stellar populations. These include both internal processes like the 'churning' of the disk by spiral arms (Sellwood & Binney 2002) and external processes like tidal interactions and mergers (Mihos & Hernquist 1994). In this context, the properties of the stellar population (and hence the SFH) in the neighborhood of a young SNR will only be representative of the SN progenitor up to a certain lookback time, t lb . In principle, t lb can be calculated for each location within a galactic disk provided there is a viable dynamic model that includes all the relevant processes. Unfortunately, no such model exists for the LMC, despite the wealth of observational information available. The LMC disk is warped (Nikolaev et al. 2004) and might also be flared (Subramanian & Subramaniam 2008), and it has a rich history of tidal interactions with the SMC and (maybe) the Milky Way, which may have important effects on the stellar dynamics (see Olsen & Massey 2007;Besla et al. 2007). The vestigial arms seen in HI (Staveley-Smith et al. 2003) are probably originated by these tidal interactions, but the details of this process are not well understood (Besla et al. 2007). Even the nature of the most prominent feature in the disk -the LMC bar -and its role in the dynamics of the galaxy are unclear (Zaritsky 2004).
Without a reliable way to calculate t lb for each of the subcells that contain our target SNRs, all we can do is estimate the relevant timescales for a number of different processes. The physical size of the subcells in the HZ09 map is 350 × 350 pc (assuming D = 50 kpc, Alves 2004), and the velocity dispersions for the young disk and old disk populations determined by Graff et al. (2000) are 8 and 22 km s −1 , respectively. Thus, the length of time that it would take an average star of the young (old) disk to drift from one subcell to the next in the absence of restoring forces, t d , is 43 (16) Myr. These timescales are not relevant for the progenitors of CC SNe, which should belong to the young disk, but they will be very important for SN Ia progenitors, which could be quite older. In any case, t lb should be much larger than t d , because (a) some regions of the LMC are more homogeneous than others, which means that stars have to drift over larger distances in order to find substantially different stellar populations, and (b) there are restoring forces like gravity that maintain the structural integrity of the disk and act to limit stellar drift.
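These timescales follow directly from the cell size and the velocity dispersions, as the short check below shows (the constants are standard conversion factors; the function name is ours).

```python
PC_IN_KM = 3.0857e13   # kilometres per parsec
YR_IN_S = 3.1557e7     # seconds per year

def drift_time_myr(distance_pc, sigma_km_s):
    """Free drift time over distance_pc at velocity dispersion sigma [Myr]."""
    return distance_pc * PC_IN_KM / sigma_km_s / YR_IN_S / 1e6

print(drift_time_myr(350, 8))    # young disk: ~43 Myr
print(drift_time_myr(350, 22))   # old disk:  ~16 Myr
```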
We have quantified the spatial homogeneity of the stellar populations around our target SNRs in Figure 3. We plot the absolute value of the relative differences in the stellar populations as a function of the distance from the center of each of the eight subcells that contain the CC and Ia SNRs in our list. To calculate the relative differences, we have integrated the SFH in each neighboring subcell, taking all the time bins where the differences between the neighbor and the SNR subcell were statistically significant (i.e., the error bars did not overlap) up to a lookback time of 1.1 Gyr, and then divided by the total number of stars formed in the SNR subcell. At each distance, the relative difference is the mean of the relative differences between the SNR subcell and all the subcells at that distance. Figure 3 shows that some of our target SNRs are in remarkably homogeneous regions of the LMC disk. These include the CC SNRs N49 and N63A in the Blue Arm and the Type Ia SNR 0509−67.5 in the Northwest Void, with average relative differences in the stellar populations below 15% within 1400 pc of the central subcell. This distance translates into t d values of 215 and 80 Myr for young and old disk stars in the absence of restoring forces.
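A possible implementation of this homogeneity metric is sketched below. The significance test (non-overlapping error bars) is approximated by requiring the bin-by-bin difference to exceed the sum of the two uncertainties; this, and the use of the absolute difference as the integrand, are our reading of the text.

```python
import numpy as np

def mean_relative_difference(sfh_ref, err_ref, sfh_neighbors, err_neighbors):
    """Mean relative SFH difference between a reference subcell and the
    subcells at one distance, over time bins with t < 1.1 Gyr.

    sfh_ref, err_ref             : arrays of mass formed per bin and errors
    sfh_neighbors, err_neighbors : lists of such arrays, one per neighbor
    """
    total_ref = sfh_ref.sum()
    diffs = []
    for sfh_nb, err_nb in zip(sfh_neighbors, err_neighbors):
        delta = np.abs(sfh_nb - sfh_ref)
        significant = delta > (err_ref + err_nb)  # error bars do not overlap
        diffs.append(delta[significant].sum() / total_ref)
    return float(np.mean(diffs))
```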
The effect of restoring forces on the value of t lb is more difficult to estimate. If no chaotic processes intervene, neighboring stars will tend to move together through the disk, which explains why some LMC structures like the bar and the Northwest Arm show up in the HZ09 map with lookback times as large as 1 Gyr (see their Figure 8). This long survival time is not restricted to large structures; in the Solar neighborhood, there is evidence that several groups of old stars (2 to 8 Gyr) are moving together through the disk of the Milky Way (Dehnen 1998). But smaller structures like young stellar clusters seem to disappear on timescales of the order of 180 Myr in the LMC (Bastian et al. 2008), which is roughly equivalent to the dynamic crossing time of the LMC disk. We will adopt this value as a figure of merit for t lb in a single subcell of the SFH maps. Since the result of Bastian et al. (2008) applies to young stars, this implies that restoring forces increase the value of t d by at least a factor ∼ 4, but we stress that this is just a very rough estimate.
We conclude that the relevance of the local SFHs for Type Ia SN progenitors will depend on both the homogeneity of the stellar populations around each subcell ( Figure 3) and the age of the progenitors. If the lifetime of Type Ia progenitors in the prompt channel is as short as the 180 Myr claimed by Aubourg et al. (2008), it should be possible to use the local SFHs of Type Ia SNRs in the LMC to explore their properties. For progenitors with longer lifetimes, the stellar context of each SNR should be taken into account. Objects like SNR 0509−67.5 might allow exploration of timescales up to several hundred Myr, but SNRs like 0519−69.0 probably will not.
THE LOCAL SFHS AROUND THE TARGET SNRS
The local SFHs in the subcells containing the eight SNRs in our list are plotted in Figures 4 and 6. The lifetime of an isolated 8 M ⊙ star with Z = 0.008 from Eldridge & Tout (2004) has been indicated by a dashed vertical line in all the plots for illustrative purposes. For simplicity, we have collapsed all the SFH bins at ages above 2 Gyr into a single bin at 10 Gyr. Several interesting average quantities can be calculated from the local SFHs. We have listed two such quantities in Tables 2 and 3: the average metallicity of all the stars formed in the subcell, Z*, and their average age, t*. These averages are always dominated by the large population of old stars in each subcell, and therefore they are irrelevant for the properties of CC SN progenitors; the values in Table 2 are merely provided for comparison with the values in Table 3 (see discussion in § 7). The average values for the entire LMC are Z* = 0.0023 and t* = 8.1 Gyr.
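The two averages are simple mass-weighted means over the SFH grid. The sketch below assumes the SFH is given as mass formed per (age bin, metallicity bin), matching the map format described in § 4; the function name is ours.

```python
import numpy as np

def population_means(mass_formed, ages_gyr, metallicities):
    """Mass-weighted mean age t* [Gyr] and metallicity Z* of a subcell.

    mass_formed   : 2-D array, stellar mass formed per (age bin, Z bin)
    ages_gyr      : representative age of each time bin [Gyr]
    metallicities : Z value of each metallicity bin
    """
    mass_formed = np.asarray(mass_formed, dtype=float)
    total = mass_formed.sum()
    t_star = mass_formed.sum(axis=1) @ np.asarray(ages_gyr) / total
    z_star = mass_formed.sum(axis=0) @ np.asarray(metallicities) / total
    return t_star, z_star
```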
Core Collapse SNRs
The most salient feature of the SFHs around the four CC SNRs (Figure 4) is that they are strongly dominated by intense bursts of star formation in the recent past (t < 40 Myr). This is of course expected, and in the cases where the SNRs have been typed for their close proximity to young stellar clusters (notably, SNR N63A), it does not reveal any new information. However, the timing of these bursts and their intensity will determine the properties of the population of massive stars that can be found at each location, and hence the likelihood of each CC SN subtype. To highlight these aspects, we display the most recent bins of the SFHs associated with the CC SNRs in greater detail in Figure 5, alongside the lifetimes of isolated massive stars with Z = 0.008 from Eldridge & Tout (2004). We have convolved the three most recent SFH bins with a standard Salpeter IMF to calculate the fraction of massive stars that are exploding now as CC SNe (f_CCSN) from progenitors in three mass intervals: 8 to 12.5 M ⊙, 12.5 to 21.5 M ⊙, and above 21.5 M ⊙. We list these fractions in Table 2 for each of the CC SNRs. The interval cuts are the stellar masses whose lifetimes correspond to the upper edges of the first and second age bins in the SFHs: 9.4 and 18.9 Myr. We remind the reader that these values of f_CCSN are calculated using isolated star models that do not take into account the potentially large effects of binarity on stellar evolution. An entirely different but also potentially serious problem comes from the fact that massive stars are notoriously difficult to study using photometry alone (Massey et al. 1995). Because StarFISH uses all the stars (not just the massive ones) in each subcell to calculate the SFR at each age, the values of f_CCSN might not be severely affected by this, but to date there has been no systematic calibration of the StarFISH results for young stellar populations. To reflect these and other caveats, we do not list error bars on the f_CCSN values, which should only be regarded as approximate.
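A simplified version of the f_CCSN calculation is sketched below. It assumes that each of the three most recent age bins supplies the CC SN progenitors in the corresponding mass interval (youngest bin, most massive interval, since those lifetimes end now), weights each bin's SFR by the Salpeter number of stars in that interval, and ignores the varying bin widths and the mass-lifetime Jacobian; the 100 Msun upper cutoff is also our assumption.

```python
import numpy as np

def salpeter_number(m_lo, m_hi, alpha=2.35):
    """Relative number of stars between m_lo and m_hi for xi(m) ~ m^-alpha."""
    return (m_lo ** (1.0 - alpha) - m_hi ** (1.0 - alpha)) / (alpha - 1.0)

def f_ccsn(sfr_bins, mass_edges=(8.0, 12.5, 21.5, 100.0)):
    """Fractions of CC SNe exploding now per progenitor mass interval.

    sfr_bins : mean SFRs in the three most recent age bins, youngest first;
               bin i is assumed to supply the stars whose lifetimes end now,
               i.e., mass interval i counted from the most massive.
    Returns fractions for (>21.5, 12.5-21.5, 8-12.5) Msun.
    """
    edges = list(mass_edges)
    weights = []
    for i, sfr in enumerate(sfr_bins):
        m_lo, m_hi = edges[-(i + 2)], edges[-(i + 1)]
        weights.append(sfr * salpeter_number(m_lo, m_hi))
    weights = np.asarray(weights)
    return weights / weights.sum()
```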
SNR N49 -The integrated SFH for the North Blue Arm region that contains SNR N49 and SNR N63A is dominated by a coherent episode of low-metallicity star formation 100 Myr ago (see § 5.1.3 and Figure 16 in HZ09), which is apparent in the corresponding panels of Figure 4. This is the reason why the values of Z* and t* for SNRs N49 and N63A are lower than those of the other SNRs. Many parts of the North Blue Arm have also had noticeable star formation activity in the last 40 Myr, although not as intense as in the more prominent star forming regions of the LMC, 30 Dor and Constellation III. SNR N49 is in one such region, which had a moderately intense SF burst 12 Myr ago, but very little SF activity in the most recent bin centered at 6.3 Myr (see Figure 5). From these properties of the SFH, the expectation is that the majority of the CC SN progenitors in this subcell should be stars between 12.5 and 21.5 M ⊙ (see Table 2). Even taking the upper limit of the SFR in the most recent bin and the lower limit on the bin at 12 Myr, the fraction of CC SN progenitors with masses above 21.5 M ⊙ remains below 1%, with all the caveats associated with the calculated values of f_CCSN. This is in good agreement with the properties of the SNR discussed in § 3.3, and it has interesting implications for the association of SNR N49 with SGR 0526−66. Magnetars are thought to be originated by very massive (> 30 M ⊙ ) stars (Gaensler et al. 2005; Figer et al. 2005; Muno et al. 2008), but such stars appear to be very scarce around SNR N49. One possibility is that the magnetar was formed elsewhere and the association is coincidental. Gaensler et al. (2001) examined this issue in detail, and came to the conclusion that the link between SNR N49 and SGR 0526−66 is considerably less convincing than those of other magnetars in SNRs. More recently, Klose et al. (2004) performed a NIR survey of the area around SNR N49 and identified a young stellar cluster (SL 463) at a projected distance of ∼ 30 pc northeast of SGR 0526−66 that could have been the birthplace of the magnetar. This cluster does fall partially on a neighboring subcell of the HZ09 map with intense SF at 6.3 Myr, consistent with the 5 to 20 Myr age estimates for SL 463 by Klose et al., and much more promising as a birthplace of massive CC progenitors (36% above 21.5 M ⊙ ). As pointed out by Klose et al., if this hypothesis is true the magnetar must have been ejected from its birthplace with a certain velocity, and should have a measurable proper motion (see their § 3.2). Another possibility is that the association between SNR N49 and SGR 0526−66 is indeed real, but not all magnetars have stellar progenitors more massive than 30 M ⊙. SN 1987A -The SFH associated with this SNR is of particular interest because, together with the known mass of the progenitor, ∼ 20 M ⊙, it can provide some test of the robustness of our SFH approach to CC SN progenitors. Panagia et al. (2000) conducted a detailed study of the immediate neighborhood of SN1987A within the 30 Dor region using HST data, and found a loose young cluster with an age of 12 ± 2 Myr, which they identified as the birthplace of SN1987A's progenitor. The local SFH drawn from the HZ09 map is indeed dominated by an intense SF episode 12 Myr ago, in good agreement with the results of Panagia et al. (2000). From the SFH, we expect 56% of the CC SN progenitors in this region to be stars between 12.5 and 21.5 M ⊙ (see Figure 5).
This agreement is encouraging, and indicates that some information about the progenitor mass of CC SNe can be recovered from the SFH maps of HZ09. SNR N63A -Like SNR N49, SNR N63A is located in a part of the Blue Arm with SF activity in the recent past. The SFH in this subcell, however, is different from that around SNR N49 in that it is dominated by the most recent bin centered at 6.3 Myr. As a result, 70% of the CC SN progenitors in this subcell are expected to be stars more massive than 21.5 M ⊙. With a Salpeter IMF, roughly 40% of these stars will be in turn more massive than 40 M ⊙, which makes a Type Ib/c origin for SNR N63A plausible, as suggested by the properties of the SNR. If this were true, SNR N63A would be one of the youngest nearby Type Ib/c SNRs, and a closer examination of this object to locate the elusive compact object and study the ejecta emission in greater detail would be of the highest interest.
SNR N158A -This object is also located in 30 Dor, but in a region with even more vigorous recent SF than the neighborhood of SN 1987A. According to our estimates, 24% of the CC SN progenitors in this subcell should be stars more massive than 21.5 M ⊙. Again, this is compatible with the relatively massive progenitor suggested by the SNR properties (see § 3.2), but unfortunately the temporal resolution of the SFH does not allow us to discriminate between a Type IIP progenitor with 20−25 M ⊙ and a Type Ib/c progenitor with > 40 M ⊙ (Chevalier 2005). From the point of view of the stellar population around SNR N158A, both hypotheses are equally plausible.
Type Ia SNRs
The local SFHs around the four Type Ia SNRs are displayed in Figure 6. Since the age of the progenitors is not known a priori, we also display the integrated SFHs in Figure 7 to provide a more intuitive picture of the makeup of the stellar populations in these locations and how they have evolved with time. With one exception (SNR N103B, see below), these SFHs are very different from those associated with the CC SNRs. The SFHs of SNRs DEM L71, 0509−67.5, and 0519−69.0 show very little (but nonzero) activity in the last 200 Myr, resulting in old and metal poor stellar populations, typical of the regions of the LMC without ongoing star formation. The SFHs of these three SNRs are also punctuated by bursts of SF at 600 Myr and 2 Gyr whose prominence varies from object to object, but this is probably more related to the global properties of the LMC (see Figure 2) than to the Type Ia SN progenitors.
Given the coincidence of the LMC disk crossing time and the upper limit for the delay time of short-lived SN Ia progenitors (180 Myr according to Aubourg et al. 2008, see § 5), it is possible to use the A+B model introduced in § 3.4 and the local SFHs to predict what fraction of Type Ia SNe will come from prompt and delayed progenitors in each subcell (f_IaSN). For the prompt component, we have calculated an average star formation rate by integrating the SFHs between 64 Myr (the minimum time necessary to produce a CO WD at the metallicity of the LMC, Dominguez et al. 1999) and 180 Myr; for the delayed component, we have used the total stellar mass formed in the subcell. The resulting values of f_IaSN are listed in Table 3, and can be compared with the values obtained for the whole LMC using the integrated SFH from Figure 2: f_IaSN,Prompt = 59% and f_IaSN,Delayed = 41%. Even more so than in the case of CC SNe, we stress that these numbers should be considered with caution, because they make strong assumptions about the properties of Type Ia SN progenitors. In particular, increasing the upper limit to the age of the prompt component from 180 to 300 Myr results in an increase of the fraction of delayed Type Ia progenitors of between 8 and 15 percentage points, depending on the SNR.
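For illustration, the A+B split can be evaluated as below. The coefficient values are of the order of those reported by Scannapieco & Bildsten (2005) and are placeholders; the values actually adopted in the paper are not quoted in this excerpt.

```python
A = 4.4e-14   # delayed: SNe Ia per yr per Msun of stellar mass (assumed value)
B = 2.6e-3    # prompt: SNe Ia per yr per (Msun/yr) of recent SFR (assumed value)

def ia_fractions(total_mass_formed, mean_sfr_64_180myr):
    """Fractions of Type Ia SNe from prompt and delayed progenitors.

    total_mass_formed  : stellar mass formed in the subcell [Msun]
    mean_sfr_64_180myr : mean SFR between 64 and 180 Myr ago [Msun/yr]
    Returns (f_prompt, f_delayed).
    """
    delayed = A * total_mass_formed
    prompt = B * mean_sfr_64_180myr
    total = prompt + delayed
    return prompt / total, delayed / total
```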
At a first glance, it is somewhat surprising that the SN Ia rates from prompt and delayed progenitors are always comparable both in the whole LMC and at the locations of the individual Type Ia SNRs. This can be easily understood by examining Figure 6 in Sullivan et al. (2006): for low values of the SFR, the specific rate of Type Ia SNe from the prompt and delayed components is very similar, and low-intensity SF has been widespread across the LMC during the last 5 Gyr (see § 4 and HZ09).
SNR DEM L71 -The exceptionally low rate of SF between 64 and 180 Myr in the subcell containing this SNR makes it the most likely object (72% probability) to be associated with a delayed Type Ia SN progenitor. Unfortunately, DEM L71 is also the oldest Type Ia SNR, and detailed information about its parent SN is hard to extract from the observations. In particular, the ejecta emission is not remarkable in any way, and seems consistent with normal Type Ia SN explosion models (BH09). The forward shock is running into material with a factor ∼ 3 density range, but this can be easily explained by inhomogeneities in the ISM. The SNR dynamics do not indicate any substantial modification of the AM by the progenitor.
SNR N103B - Of all the SFHs associated with Type Ia SNRs that we discuss here, that of SNR N103B is without doubt the most remarkable. This SNR is in a region of the LMC bar that has seen vigorous SF activity in the recent past, with a prominent extended peak between 100 and 50 Myr (probably associated with the nearby cluster NGC 1850, Gilmozzi et al. 1994) and a more recent one 12 Myr ago. The intensity of this last peak is even larger than those associated with SN 1987A and SNR N158A in the 30 Dor region. It is not surprising that Chu & Kennicutt (1988) mistyped SNR N103B as a CC SNR before a good-quality X-ray spectrum was available, based on this evident association with recent SF. This does not necessarily imply that the progenitor of SNR N103B was a young star, although the predicted fraction of prompt Ia progenitors (73%) is higher for this SNR than for any of the others. However, if the properties of the SNR itself are taken into account, the association with recent SF becomes more intriguing. Lewis et al. (2003) noted a number of similarities between the strong lateral asymmetry of SNR N103B and that of Kepler's SNR. In the case of SNR N103B, it is unclear whether this asymmetry is due to an interaction with an ISM structure (e.g., the nearby HII region DEM 84) or to some kind of CSM modified by the SN progenitor. In the Kepler SNR, however, it has been shown that the asymmetry is indeed due to interaction with a CSM modified by the Type Ia SN progenitor (see Reynolds et al. 2007, and references therein), suggesting that either the progenitor or its companion must have been relatively massive. From the shape of the SFH, a relatively young (less than 150 Myr) and metal-rich (Z = 0.008) progenitor for SNR N103B seems a likely possibility.
SNR 0509−67.5 - This SNR is in a region that HZ09 called the Northwest Void due to its conspicuous lack of recent star formation. In fact, HZ09 argue that the old and metal-poor stars in this part of the LMC are representative of the primordial stellar population of the LMC. Moreover, the SNR is in a very homogeneous region (see Figure 3), with neighboring subcells having very similar stellar populations. The closest subcells with noticeable SF activity (above 10 −3 M ⊙ yr −1 ) at t < 100 Myr are roughly 1 kpc away from the SNR. This would not be noteworthy for a generic Type Ia SNR, but thanks to the work of Rest et al. (2008) and Badenes et al. (2008b) we know that SNR 0509−67.5 originated in an exceptionally bright Type Ia SN which synthesized ∼ 1 M ⊙ of 56 Ni. This seems to be at odds with the conventional wisdom regarding prompt and delayed Type Ia SN progenitors, which holds that exceptionally bright Type Ia SNe are usually associated with younger stellar populations (Gallagher et al. 2005), but in fact the local SFH predicts equal contributions from prompt and delayed progenitors even in this remarkably quiet part of the LMC. If this SNR did have a relatively young and massive progenitor, however, it appears to have left the AM around it relatively undisturbed.
SNR 0519−69.0 - This SNR falls in line with 0509−67.5 and DEM L71 in showing little SF at all times, although the local stellar population appears to be more metal-rich than in the other Type Ia SNRs (see Figure 7). Other than that, SNR 0519−69.0 is unremarkable, both in its overall structure and dynamics and in the properties of its ejecta emission. From the local SFH, delayed Type Ia progenitors are slightly favored over prompt ones, but not significantly.
Core Collapse Supernovae
The combination of SNR studies and local SFHs that we have introduced in this paper is a promising method to further our understanding of CC SN progenitors, but it needs to be refined before it can have a significant impact on this field of research. Two major issues need to be addressed. First, the techniques used to determine the subtype of CC SNe from their SNRs are still too crude to provide consistent results, even for well-observed objects like the CC SNRs in our sample. And second, the ability of tools like StarFISH to recover the SFH from mixed stellar populations at the ages that are relevant for the evolution of CC SN progenitors (below 40 Myr) using photometric data needs to be firmly established. It would be interesting to investigate the possibility of an increased temporal resolution at very early times (less than 10 Myr) in order to distinguish between Type IIL/b and Type Ib/c SN progenitors, but this might require support from spectroscopic surveys like VLT-FLAMES (Evans et al. 2006). We hope to address these issues in future publications.
Despite the limitations of the method, we have been able to obtain some interesting results. In general, we can say that the properties of the SNRs and the local SFHs are compatible with each other, allowing for the large uncertainties discussed above. Among our target objects, the SFH around SNR N49 seems to indicate that stars more massive than 21.5 M ⊙ are scarce in this part of the LMC. This opens two possibilities: either very massive (> 30 M ⊙ ) stars are not necessary to produce magnetars, or the source SGR 0526−66 is not in fact associated with SNR N49, as suggested by Klose et al. (2004). The SFH around SNR N63A seems compatible with a very massive SN progenitor, maybe a Type Ib/c SN, but other possibilities cannot be discarded. For SNR N158A, the temporal resolution of the HZ09 maps is too coarse to resolve the issue of the progenitor. In any case, our findings stress the importance of revisiting and reanalyzing the X-ray emission from these young CC SNRs in detail to learn more about the SN explosions that produced them.
Type Ia Supernovae
We have found that the combination of SNR studies and local SFHs is a powerful tool to explore the properties of Type Ia SN progenitors. The X-ray emission from Type Ia SNRs is well understood in terms of explosion physics, and the ability of SNR studies to recover the Type Ia SN subtype (i.e., bright vs. dim) has been demonstrated by Badenes et al. (2008b). Moreover, the combination of StarFISH and a data set like the MCPS is ideally suited to characterize the stellar population in different parts of the LMC.
Among the Type Ia SNRs that we have studied, one (SNR N103B) is found in a region that has experienced vigorous SF in the recent past, but the other three are associated with old and metal-poor stellar populations. On first inspection, it would be tempting to establish connections between these two kinds of environments and the prompt and delayed components of Type Ia progenitors proposed by Scannapieco & Bildsten (2005), but we have found that the situation is more complex than that. Even in regions with a remarkably small amount of recent SF, it is very hard to isolate objects that arise unambiguously from delayed Type Ia progenitors. This is due to the high efficiency of the still unknown mechanisms that turn CO WDs from relatively young, intermediate-mass stars into Type Ia SN progenitors of the prompt component (Mannucci et al. 2006; Pritchet et al. 2008; Maoz 2008). In order to identify with some confidence a Type Ia SNR as a product of the prompt or delayed component, it would have to be either in a region with very vigorous SF or in a pristine region with no measurable SF activity in the appropriate range of ages. Since even elliptical galaxies appear to have some residual SF during their entire lifetimes (Kaviraj et al. 2008), isolating the delayed Type Ia progenitors in the local universe to study their properties in detail might be a very challenging task, unless a Type Ia SN is found in a nearby globular cluster (Shara & Hurley 2002).
Our results allow us to test specific theoretical predictions about Type Ia progenitors, like the claim by Kobayashi et al. (1998) that the Type Ia SN rate should be very low for metallicities lower than a tenth of the solar value. This is based on the so-called 'accretion wind' scenario for single-degenerate Type Ia progenitors, which requires a minimum opacity in the material transferred from the companion to the WD (Hachisu et al. 1996). The average stellar metallicity is close to this value or even much lower around three of the four Type Ia SNRs that we have examined, which seems hard to reconcile with the results of Kobayashi et al., although we stress that all the regions that we examined do contain a small number of stars with higher metallicities. Similar concerns about this prediction and its implications have been raised by Prieto et al. (2008), who found several Type Ia SNe in low-metallicity dwarf galaxies. The accretion wind scenario also makes strong predictions about the shape of the CSM around Type Ia progenitors that are not substantiated by the dynamics of known Type Ia SNRs. In this context, an interesting possibility is opened by the recent work of Badenes et al. (2008a), which allows one to make direct measurements of the metallicity of Type Ia SN progenitors using Mn and Cr lines in the X-ray spectra of young SNRs. If this technique could be applied to the LMC SNRs, we would be able to contrast the results with the properties of the stellar populations, and test theoretical ideas about the role of metallicity in different kinds of Type Ia SNe (e.g., Timmes et al. 2003).
Two of the SNRs we have examined have remarkable properties with important implications for Type Ia SN progenitors. The unusual morphology of SNR N103B (see Lewis et al. 2003, and references therein), which is strongly suggestive of some kind of CSM interaction, has become even more noteworthy in light of the vigorous recent SF revealed by the local SFH. It appears that SNR N103B might be a member of an emerging class of Type Ia SNRs with CSM interaction that could be associated with relatively young and massive progenitors that lose an appreciable amount of mass before exploding as Type Ia SNe. This class would include the Kepler SNR (Reynolds et al. 2007) in our Galaxy and other LMC SNRs, but the local SFHs around these objects should be examined to confirm this possibility. Evident signs of CSM interaction, however, cannot be found in other well-studied Type Ia objects like Tycho and SN 1006 in our own Galaxy or the other three LMC SNRs that we have analyzed here, indicating that a majority of Type Ia progenitors do not modify their surroundings in a noticeable way. Since it is unlikely that all these other objects have had progenitors from the delayed component, we are left with two possibilities: either the amount of mass loss during the pre-SN evolution of prompt Type Ia progenitors has a large dynamic range, or there is more than one way to produce Type Ia SNe with short delay times.
The properties of SNR 0509−67.5 are also remarkable, for entirely different reasons. Rest et al. (2008) found ∆m 15 < 0.9 for this SN, which translates into a V magnitude at maximum light close to −19.5 (Phillips 1993). Yet, the SN exploded in a large region of the LMC with very little SF in the recent past (see Figure 3). The stars in the subcell that contains this SNR are on average very old (t̄ * = 7.9 Gyr) and metal-poor (Z̄ * = 0.0014). Gallagher et al. (2008) do find some relatively bright Type Ia SNe associated with old stellar populations, but all their objects with peak V magnitude above −19 and ages above 5 Gyr have large error bars on the age axis (see their Figure 5). Thus, SNR 0509−67.5 is probably the first bona fide example of an exceptionally bright Type Ia SN associated with an old stellar population. We note that our measurement of t̄ * should be very reliable, because it has not been derived from a luminosity-weighted spectrum. It is important to stress that these bulk properties of the stellar population around SNR 0509−67.5 do not preclude a relatively young progenitor for this object. During the age range that we have adopted for prompt Type Ia progenitors, 2.1 × 10 4 M ⊙ of stars were formed in the subcell that contains SNR 0509−67.5. With a Salpeter IMF, roughly 10% of this mass is in the 4 to 6 M ⊙ range (the ZAMS masses that give CO WDs on timescales shorter than 180 Myr according to Dominguez et al. 1999), which results in a few hundred CO WDs from young stars. This number may seem small, but observational constraints on the percentage of CO WDs that eventually explode as SNe Ia are high (2% to 40% according to Maoz 2008), making a prompt progenitor for SNR 0509−67.5 a perfectly reasonable possibility. The fact that an object like SNR 0509−67.5 appears in a sample of only four SNRs implies that bright Type Ia SNe in old stellar populations may not be an exceptional occurrence, which should be taken into account when examining the contribution of bright and dim Type Ia SNe in cosmological studies.
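The CO-WD budget quoted above can be reproduced with a short calculation. The sketch below assumes a Salpeter IMF; the result is sensitive to the adopted lower IMF mass limit (a value not stated here), ranging from about 4% of the formed mass for a 0.1 M ⊙ cutoff to roughly the quoted 10% for a 1 M ⊙ cutoff, so the "few hundred" CO WDs holds in either case.

```python
# CO-WD budget for the subcell: Salpeter-IMF mass fraction in 4-6 Msun stars,
# applied to the 2.1e4 Msun formed during the prompt age window. The overall
# IMF mass limits are assumptions, and the fraction is sensitive to them.

def salpeter_mass_fraction(m1, m2, m_lo, m_hi=100.0, alpha=2.35):
    """Fraction of the total formed mass lying in [m1, m2]."""
    def m_int(a, b):  # integral of M * (dN/dM) dM, up to a constant
        return (b ** (2.0 - alpha) - a ** (2.0 - alpha)) / (2.0 - alpha)
    return m_int(m1, m2) / m_int(m_lo, m_hi)

for m_lo in (0.1, 1.0):
    frac = salpeter_mass_fraction(4.0, 6.0, m_lo)
    n_wd = 2.1e4 * frac / 5.0   # ~5 Msun mean mass of a 4-6 Msun star
    print(f"m_lo = {m_lo} Msun: fraction = {frac:.1%}, CO WDs ~ {n_wd:.0f}")
```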
Our results underline the dangers of trying to understand the behavior of Type Ia SN progenitors by studying only the bulk properties of unresolved stellar populations in distant galaxies. If the LMC had been a distant Type Ia SN host, two objects with such radically different SFHs as SNRs N103B and 0509−67.5 would have been assigned the same age and metallicity. Even average quantities obtained from resolved stellar populations like t̄ * and Z̄ * can be misleading if they are used by themselves to characterize the properties of SN progenitors: compare the values for CC and Type Ia SNRs from Tables 2 and 3.
CONCLUSIONS
In this paper, we have presented the first systematic study of the stellar populations around CC and Type Ia SNRs in the LMC. Our ultimate goal is to use all the available information on the X-ray emitting SNRs and the resolved stellar populations of the Magellanic Clouds to improve our understanding of CC and Type Ia SN progenitors, their final evolutionary stages, the SN explosions that mark their demise, and the aftermath of these explosions. In that broad context, this work only represents a first exploration of the many possibilities that are opened by recent theoretical and observational advances in both SNR research and stellar population studies. We plan to pursue this line of research in the future, increasing the sample of objects and refining the techniques that we have presented here.
We have found that the local SFHs around the CC SNRs in our sample (N49, SN 1987A, N63A, and N158A) are always dominated by significant episodes of SF in the recent past (t < 40 Myr), as expected from previous observational and theoretical work. The timing and intensity of these SF episodes can provide interesting constraints on the masses of CC SN progenitors, but more work is needed to explore the full potential of this method.
The local SFHs have also allowed us to study the ages and metallicities of the stellar populations around our target Type Ia SNRs (DEM L71, N103B, 0509−67.5, and 0519−69.0) in great detail. We have found that Type Ia SNe explode in a variety of environments, ranging from old and metal-poor populations to sites with vigorous SF in the recent past. Using the two-component model proposed by Scannapieco & Bildsten (2005), we have explored the relationship between specific properties of Type Ia SNe and their parent stellar populations. We have seen that extremely bright Type Ia SNe can explode very far away from any significant star formation activity (SNR 0509−67.5), and that Type Ia SNe associated with young stellar populations might sometimes experience significant mass-loss before they explode (SNR N103B). Recent studies of extragalactic Type Ia SN rates and our own findings suggest that reality is probably too complex to be explained with the popular two-component progenitor model. If this is so, high-quality SFHs for Type Ia SNRs obtained from resolved stellar populations like the ones we present here should provide an excellent observational constraint on new ideas about Type Ia progenitors.
This work has benefited from many conversations with a large number of colleagues, including Steve Bickerton, Jeremy Goodman, Jim Gunn, Jack Hughes, Raul Jiménez, Vicky Kaspi, Dan Maoz, Maryam Modjaz, Evan Scannapieco, Jerry Sellwood, and Kris Stanek. We are also grateful to the referee, Stephen Smartt, for many helpful suggestions that improved the quality of our manuscript. Support for this work was provided by NASA through Chandra Postdoctoral Fellowship Award PF6-70046 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-0360. DZ acknowledges support from NASA LTSA grant NNG05GE82G. JLP is supported by NSF grant AST-0707982.
Table notes: a ... coordinates. For more details on the SNR names, see Williams et al. (1999). b From Williams et al. (1999). c Ages estimated from SNR dynamics are subject to substantial uncertainties. See text for details. d The age from the light echo dynamics is 400 ± 120 yr (Rest et al. 2005), but the spectral and dynamical properties of the SNR, together with some historical considerations, constrain the value much more; see discussion in § 5.3 of Badenes et al. (2008b). | 2009-05-21T18:10:52.000Z | 2009-02-16T00:00:00.000 | {
"year": 2009,
"sha1": "81298847d25a28858c2b393d43e79abadc43fca5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0902.2787",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "81298847d25a28858c2b393d43e79abadc43fca5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245354129 | pes2o/s2orc | v3-fos-license | Binary Bose-Einstein condensates in a disordered time-dependent potential
We study the non-equilibrium evolution of binary Bose-Einstein condensates in the presence of a weak random potential with a Gaussian correlation function using time-dependent perturbation theory. We apply this theory to construct a closed set of equations that highlight the role of the spectacular interplay between the disorder and the interspecies interactions in the time evolution of the density induced by disorder in each component. It is found that the latter increases with time, favoring localization of both species. The time scale at which the theory remains valid depends on the respective system parameters. We show analytically and numerically that such a system supports a steady state that changes periodically during its time propagation. The obtained dynamical corrections indicate that disorder may transform the system into a stationary out-of-equilibrium state. Understanding this time evolution is pivotal for the realization of Floquet condensates.
I. INTRODUCTION
Ultracold Bose mixtures of atomic gases offer unprecedented control tools, opening promising new avenues for the investigation of Bose-Einstein condensates (BECs) in a disordered environment [1,2]. Disordered binary BECs present rich physics not encountered in a single-component condensate, due to the intriguing interplay of quantum fluctuations induced by intra- and interspecies interactions and disorder effects. Such dirty Bose mixtures could be regarded as a feasible simulator to analyze a plethora of novel quantum phenomena such as supersolidity and quantum glasses.
Dilute Bose-Bose gases in a weak disorder potential have recently undergone a resurgence of interest due to their fascinating properties (see, e.g., [3-6]). One of the most remarkable features arising from the presence of disorder in Bose mixture quantum gases is the modification of the miscibility criterion and the dramatic phase separation between the two species [1].
The considerable interest in studying the dynamics of disordered BECs driven out of equilibrium by slow (adiabatic) or sudden (quenched) changes of system parameters, such as the scattering length, has been boosted by remarkable advances in the tunability of ultracold atomic gases [7-11]. The non-equilibrium evolution of a BEC offers a unique opportunity to explore strongly correlated systems and transport in realistic physical systems. Chen et al. [12] have shown that, for a weakly non-equilibrium disordered Bose gas under a quantum quench of the interaction, the disorder can destroy superfluidity substantially more than the condensate, leading to the so-called dynamical Bose glass. Moreover, it has been revealed that an externally controlled spatiotemporal periodic drive constitutes an excellent platform for creating novel non-equilibrium states of quantum matter [13-15]. A recent study demonstrates that the condensate deformation is a signature of the non-equilibrium features of steady states of a Bose gas in temporally controlled weak disorder [16]. Quite recently, the experimental realization of ultracold bosonic gases in dynamic disorder with controlled correlation time has been reported in Ref. [17], where the microscopic origin of friction and dissipation was well illustrated. There has also been an extensive amount of work addressing the dynamics of bosons in disordered optical lattices (see, e.g., [8, 18-24]).
Motivated by the above experimental and theoretical works, we investigate in this paper the non-equilibrium evolution of homogeneous Bose-Bose mixtures subjected to a time-dependent disorder potential with a Gaussian correlation function. To this end, we use time-dependent perturbation theory. The perturbation method has proven to be a very useful and powerful tool to capture the main features of both equilibrium and non-equilibrium disordered single BECs [1, 16, 25-30].
We start with the equilibrium case and analyze some ground-state and thermodynamic aspects of disordered mixtures. We look in particular at how the interplay of a correlated disorder (i.e., nonzero correlation length) and interspecies interactions affects the condensate deformation and the equations of state (EoS). Our analysis reveals that the condensates remain robust when the disorder correlation length is larger than the healing length, regardless of the strength of the interspecies interaction and of the disorder potential. Comparison with our recent findings for Bose mixtures in a white-noise potential [1] and against other predictions, such as those of Huang and Meng [31, 32], suggests the importance of the disorder correlation length in the localization process.
Furthermore, we study the dynamical properties of two BECs in a Gaussian disorder potential with time-periodic driving using the aforementioned time-dependent perturbation theory. Periodically driven quantum systems have gained tremendous interest recently owing to the possibility of Floquet engineering (see [33] for a review). In the field of ultracold quantum gases, this concept could offer powerful techniques to reach novel phase transitions in condensed matter (see [33-43] and references therein). We show that our theory predicts an oscillatory behavior of the condensate deformation during the time evolution. It is found, in addition, that disorder drives the growth of the disorder fraction with time, reducing the condensed fraction in each species. Therefore, the time scale for reaching a sizeable dynamical depletion in principle limits the validity of the present perturbation theory. Interestingly, a long-time analysis predicts the existence of stationary states resembling Floquet states, due to the combination of many-body effects, the drive frequency, and disorder correlations.
The rest of the paper is organized as follows. In Section II, we review the various formulas describing equilibrium Bose mixtures for an arbitrary disorder potential. We then restrict ourselves to a symmetric mixture with Gaussian-correlated disorder. The results for asymmetric mixtures are given in the Appendix. Section III is the main section of this paper. We introduce the time-dependent perturbative theory for disordered Bose mixtures and calculate the time-dependent condensate deformation due to the disorder. Many appealing issues are also discussed. Finally, in Section IV we present conclusions and lessons learned.
II. EQUILIBRIUM MIXTURE
Consider weakly interacting homogeneous two-component BECs with equal masses, $m_1 = m_2 = m$, subjected to a weak random potential $U(\mathbf{r})$, labeling the components by the index $j$. At zero temperature, the system is described by the coupled Gross-Pitaevskii equations (GPE):

$$\mu_j \Phi_j(\mathbf{r}) = \left[-\frac{\hbar^2}{2m}\Delta + U(\mathbf{r}) + g_j \Phi_j^2(\mathbf{r}) + g_{12}\Phi_{\bar{j}}^2(\mathbf{r})\right]\Phi_j(\mathbf{r}), \qquad (1)$$

where $\Phi_j$ is the wavefunction of each condensate, $\bar{j} = 3 - j$, $\mu_j$ is the chemical potential of each condensate, and $g_j = 4\pi\hbar^2 a_j/m$ and $g_{12} = g_{21} = 4\pi\hbar^2 a_{12}/m$, with $a_j$ and $a_{12}$ being the intraspecies and the interspecies scattering lengths, respectively. The disorder potential is assumed to have vanishing ensemble average, $\langle U(\mathbf{r})\rangle = 0$, and a finite correlation of the form

$$\langle U(\mathbf{r})\,U(\mathbf{r}')\rangle = R(\mathbf{r} - \mathbf{r}'). \qquad (2)$$

In what follows, we restrict ourselves to the case of a Gaussian correlation with the Fourier transform [28, 44-47]

$$R(k) = R_0\, e^{-k^2\sigma^2/2},$$

where $R_0$ is the disorder strength, with dimension (energy)$^2$ × (length)$^3$, and $\sigma$ is the disorder correlation length.
For weak disorder, Eq. (1) can be solved using straightforward perturbation theory in powers of $U$, using the expansion [1]

$$\Phi_j(\mathbf{r}) = \Phi_j^{(0)}(\mathbf{r}) + \Phi_j^{(1)}(\mathbf{r}) + \Phi_j^{(2)}(\mathbf{r}) + \cdots, \qquad (3)$$

where the index $i$ in the real-valued functions $\Phi_j^{(i)}(\mathbf{r})$ signals the $i$-th order contribution with respect to the disorder potential. They can be determined by inserting the perturbation series (3) into Eq. (1) and collecting the terms up to $U^2$.
A. Glassy fraction
The condensate fluctuations (or condensate deformation) due to disorder, known as the glassy fraction, can be given as the variance of the wavefunction, $n^{\rm eq}_{Rj} = \langle \Phi_j^{(1)\,2}(\mathbf{r})\rangle + \cdots$. Working in momentum space, the glassy fraction reads [1]:

$$n^{\rm eq}_{Rj} = n_j \int \frac{d^3k}{(2\pi)^3}\, R(k)\, \frac{\left(E_k + 2 g_{\bar{j}} n_{\bar{j}} - 2 g_{12} n_{\bar{j}}\right)^2}{\tilde{E}_k^2}, \qquad (4)$$

where $\tilde{E}_k = (E_k + 2 g_j n_j)(E_k + 2 g_{\bar{j}} n_{\bar{j}}) - 4 g_{12}^2 n_j n_{\bar{j}}$, $E_k = \hbar^2 k^2/2m$ is the kinetic energy, and $R(k)$ is the disorder correlation.
Let us assume a symmetric mixture, where $a_1 = a_2 = a$ and $n_1 = n_2 = n$. Performing the integral (4), the glassy fraction turns out to be given by

$$n^{\rm eq}_R = \frac{n\,\xi_+}{\ell_L}\, f\!\left(\frac{\sigma}{\xi_+}\right), \qquad (5)$$

where the disorder function reads

$$f(x) = \sqrt{2}\left[(1 + 2x^2)\, e^{x^2}\,\mathrm{erfc}(x) - \frac{2x}{\sqrt{\pi}}\right], \qquad (6)$$

where erfc(x) is the complementary error function, $\xi_+ = \xi/\sqrt{\delta a_+}$ is the extended healing length for a symmetric mixture with short-range interactions, $\xi = \hbar/\sqrt{2mng}$, $\delta a_+ = 1 + a_{12}/a$, and $\ell_L = 4\pi\hbar^4/(m^2 R_0)$ accounts for the Larkin length, which is associated with the pinning energy due to the disorder [16, 48, 49]. For σ → 0, f(0) = √2; thus, the result for binary BECs with a weak delta-correlated disorder is recovered [1]. For σ → 0 and $a_{12} = 0$, one can reproduce the seminal Huang-Meng findings for a dirty single BEC [31].
The behavior of the function (6) is shown in Fig. 1.a. As expected, the function f decreases with the disorder correlation length σ/ξ, regardless of the strength of the interspecies interactions, indicating that the condensate depletion due to disorder effects is suppressed for σ ≫ ξ. One might explain this delocalization as the result of a screening of the disorder by the interspecies interaction. The same situation takes place in single dirty dipolar and nondipolar BECs [28, 30, 44, 45, 50]. For fixed σ, $n^{\rm eq}_R$ decreases with the interspecies interactions $a_{12}/a$.
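Assuming the reconstruction in Eqs. (5)-(6), the short sketch below evaluates f numerically; scipy's erfcx (the scaled complementary error function, $e^{x^2}\mathrm{erfc}(x)$) keeps the evaluation stable at large σ/ξ₊. The values shown depend on that reconstruction.

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x)

# Disorder function f from the reconstructed Eq. (6); values depend on that
# reconstruction. f(0) = sqrt(2) recovers the delta-correlated limit.

def f_disorder(x):
    x = np.asarray(x, dtype=float)
    return np.sqrt(2.0) * ((1.0 + 2.0 * x**2) * erfcx(x) - 2.0 * x / np.sqrt(np.pi))

x = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
print(f_disorder(x))   # starts at ~1.414 and decreases with sigma/xi_+
```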
The validity of the present perturbation theory requires the condensate depletion due to the disorder to be much smaller than the total density, $n^{\rm eq}_R \ll n$. This implies that, to have a weak disorder potential, the following condition must be fulfilled:

$$\xi_+\, f(\sigma/\xi_+) \ll \ell_L. \qquad (7)$$

Equation (7) is a natural extension of the result of [16].
B. Equation of state
The EoS to second order in the disorder strength is given by the integral expression of Eq. (8) [1], in which higher-order terms in $g_{12}$ were omitted. Upon calculating the integral (8) for the special case $n_1 = n_2 = n$ and $a_1 = a_2 = a$, we find for the EoS the result (9), whose disorder functions $h(\delta a_\pm, \sigma/\xi_\pm)$ involve the combination $e^{(\sigma/\xi_\pm)^2}\,\mathrm{erfc}(\sigma/\xi_\pm)$, with $\xi_- = \xi/\sqrt{\delta a_-}$ and $\delta a_- = 1 - a_{12}/a$. In the limit σ → 0, one recovers Eq. (10), which is in agreement with our recent result for equilibrium Bose mixtures with uncorrelated disorder [1]. Evidently, Eq. (9) shows that the EoS increases linearly with the disorder strength $R_0$. Figure 1.b shows that the disorder function h decreases with σ/ξ and increases with $a_{12}/a$. This signals that the interplay of disorder effects and interspecies interactions could modify the behavior of the EoS of Bose mixtures.
C. Sound velocity
Corrections to the sound velocity of each component due to the disorder fluctuations are given by Eq. (11) (see [46]). In the case of a balanced mixture, $a_1 = a_2 = a$ and $n_1 = n_2 = n$, we find after a straightforward calculation the result (12), where $c_{s0} = \sqrt{gn/m}$ is the zeroth-order sound velocity and the disorder function $S(\delta a_\pm, \sigma/\xi_\pm)$, given by Eq. (13), has practically the same behavior as the function h, as displayed in Fig. 1.c.
III. NON-EQUILIBRIUM EVOLUTION
We consider two weakly interacting ultracold Bose gases subjected to a weak random potential $U(\mathbf{r},t) = u(\mathbf{r})F(t)$, such that $F(0) = 0$ and $0 \le F(t) \le 1$. The system evolution at $t \ge 0$ is described by the coupled time-dependent GP equations, which can be written as

$$i\hbar\,\frac{\partial \Phi_j(\mathbf{r},t)}{\partial t} = \left[-\frac{\hbar^2}{2m}\Delta + U(\mathbf{r},t) + g_j|\Phi_j(\mathbf{r},t)|^2 + g_{12}|\Phi_{\bar{j}}(\mathbf{r},t)|^2\right]\Phi_j(\mathbf{r},t). \qquad (14)$$
For sufficiently small $u(\mathbf{r})$, the system can be treated perturbatively. Therefore, we can write the wavefunctions as

$$\Phi_j(\mathbf{r},t) = e^{-i\mu_{0j}t/\hbar}\left[\Phi_j^{(0)}(\mathbf{r}) + \Phi_j^{(1)}(\mathbf{r},t) + \Phi_j^{(2)}(\mathbf{r},t) + \cdots\right], \qquad (15)$$

where $\Phi_j^{(\alpha)}(\mathbf{r},0) = 0$ for α ≥ 1. The particle densities $n_j = |\Phi_j^{(0)}(\mathbf{r})|^2$ determine the chemical potentials of the system in its equilibrium ground state: $\mu_{0j} = g_j n_j + g_{12} n_{\bar{j}}$.
The condensate deformation due to the disorder potential can be given by the disorder average of the density fluctuation. Using the perturbative expansion (15) up to second order, and assuming that the disorder has vanishing ensemble averages, the deformation of each BEC becomes

$$n_{Rj}(t) = \int \frac{d^3k}{(2\pi)^3}\,\big\langle |\Phi_j^{(1)}(\mathbf{k},t)|^2 \big\rangle. \qquad (18)$$

The first-order coupled equations that follow from Eq. (14) read

$$i\hbar\,\frac{\partial \Phi_j^{(1)}}{\partial t} = -\frac{\hbar^2}{2m}\Delta \Phi_j^{(1)} + g_j n_j \big(\Phi_j^{(1)} + \Phi_j^{(1)*}\big) + g_{12}\sqrt{n_j n_{\bar{j}}}\,\big(\Phi_{\bar{j}}^{(1)} + \Phi_{\bar{j}}^{(1)*}\big) + \sqrt{n_j}\,u(\mathbf{r})F(t). \qquad (19)$$

In Fourier space, Eqs. (19) become ordinary differential equations for $\Phi_j^{(1)}(\mathbf{k},t)$, where $\hbar\omega_k = E_k$ is the kinetic energy. Here we used the Laplace transform $\mathcal{F}(s) = \int_0^\infty e^{-st}F(t)\,dt$, where $\hbar\Omega_{kj} = \sqrt{\hbar\omega_k\,(\hbar\omega_k + 2 g_j n_j)}$ is the standard dispersion relation for a single BEC. Using the inverse Laplace transform, we obtain $\Phi_j^{(1)}(\mathbf{k},t)$ in terms of the kernels

$$K_{j\pm}(\mathbf{k},t) = i\cos(\Omega_{k\pm} t) + \frac{\omega_k}{\Omega_{k\pm}}\sin(\Omega_{k\pm} t), \qquad (26)$$

where the Bogoliubov spectrum of two-component BECs is given by (see, e.g., [51] and references therein)

$$\hbar\Omega_{k\pm} = \sqrt{E_k^2 + 2 m c_{s\pm}^2 E_k}, \qquad (27)$$

with $c_{s\pm}^2 = \frac{c_{s1}^{(0)2}}{2}\left[1 + \bar{\mu} \pm \sqrt{(1-\bar{\mu})^2 + 4\Delta^{-1}\bar{\mu}}\,\right]$ being the sound velocities in the density ($c_{s-}$) and spin ($c_{s+}$) channels, $\bar{\mu}_j = n_{\bar{j}} g_{\bar{j}}/(n_j g_j)$, and $\Delta = g_j g_{\bar{j}}/g_{12}^2$. In the limit k → 0, the total dispersion is phonon-like, $\Omega_{k\pm} = c_{s\pm} k$.
Using the fact that $\langle\Phi_j^{(1)*}(\mathbf{k},t)\rangle = 0$, the time-dependent disorder densities (18) take the form of Eq. (28). To make the form of the density (28) explicit, let us suppose a time-periodic disorder potential which is abruptly switched on at time t = 0,

$$F(t) = \sin^2(\omega t/2), \qquad (29)$$

where ω is the external frequency. For $t = (2j + 1)\pi/\omega$, with j an integer, the function F(t) reaches its maximum (F(t) = 1). Therefore, the system attains a new stationary state, as we shall see hereafter.
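A minimal check of the drive profile reconstructed in Eq. (29), verifying the properties stated above: F(0) = 0, 0 ≤ F(t) ≤ 1, and maxima F = 1 at t = (2j + 1)π/ω.

```python
import numpy as np

# Drive profile from the reconstructed Eq. (29): F(t) = sin^2(omega*t/2).

omega = 2.0
F = lambda t: np.sin(0.5 * omega * t) ** 2

print(F(0.0))                                   # 0.0
t_max = (2 * np.arange(4) + 1) * np.pi / omega  # j = 0, 1, 2, 3
print(F(t_max))                                 # [1. 1. 1. 1.]
```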
For a symmetric mixture, where $a_1 = a_2 = a$ and $n_1 = n_2 = n$, the Bogoliubov spectrum (27) reduces to the following dimensionless form:

$$\Omega_{k\pm} = \Omega_0\,(k\xi)\sqrt{(k\xi)^2 + 2(1 \pm a_{12}/a)},$$

where $\Omega_0 = ng/\hbar$ is the inverse characteristic mean-field time scale. It is clearly seen that for $a_{12}/a > 1$, the spectrum associated with the density channel, $\Omega_{k-}$, becomes complex, destabilizing the system, while for $a_{12}/a < -1$, the spectrum $\Omega_{k+}$ may become imaginary and thus the mixture would be destabilized. Inserting $\Omega_{k\pm}$ and Eq. (26) into Eq. (28), and keeping in mind that the terms associated with $\Omega_{k-}$ cancel, one finds the expression (30) for $n_R(t)$. This equation tells us that once the depletion due to disorder is known at t = 0, both the condensed density $n - n_R$ and $n_R$ can be calculated in a somewhat simpler way at time t > 0. For a given k and ω > 0, the density (30) is peaked ("resonant") at frequencies $\Omega_{k+} = \omega$ (i.e., when the external frequency nearly matches the eigenfrequencies). In terms of momenta, this condition yields

$$(k_{\rm res}\,\xi)^2 = \sqrt{(1 + a_{12}/a)^2 + (\omega/\Omega_0)^2} - (1 + a_{12}/a).$$

For $k \ge k_{\rm res}$, the disorder depletion becomes very large, $n_R(t)/n \gg 1$, indicating that the perturbation theory is no longer valid in such an unstable regime, despite the fact that the disorder is naively weak. Therefore, the intuitive stability criterion reads $k < k_{\rm res}$. In order to substantiate the relevance of the above time-dependent perturbative mean-field approach for laboratory experiments, we consider the $^{87}$Rb-$^{87}$Rb mixture in two different internal states, but our theory can be readily generalized to other mixtures. The scattering lengths and the densities are chosen as $a_1 = a_2 = a = 95.44\,a_0$ [52], with $a_0$ the Bohr radius, and $n_1 = n_2 = n = 10^{21}$ m$^{-3}$, respectively, which is sufficient to ensure that the system meets the requirement of a weakly interacting gas, $\sqrt{na^3} \simeq 10^{-2} \ll 1$. The interspecies scattering length $a_{12}$, which can be adjusted via a Feshbach resonance, is selected in such a way that the phase-separated condition is fulfilled throughout the dynamics. The disorder strength is fixed to $R' = 0.5$, which gives $n_R/n \lesssim 1\%$, ensuring the sufficient criterion for the weak-disorder regime. We employ various disorder driving frequencies and correlation lengths.
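The resonance momentum above follows in closed form from setting $\Omega_{k+} = \omega$ in the quoted dimensionless spectrum, which is quadratic in $(k\xi)^2$; the sketch below implements that closed form and verifies the root numerically. The closed form is a derivation from the quoted spectrum, not an expression taken verbatim from the paper.

```python
import numpy as np

# Resonance momentum from the dimensionless branch
# Omega_{k+}/Omega_0 = (k*xi) * sqrt((k*xi)**2 + 2*(1 + a12/a)).

def k_res_xi(w, a12_over_a):
    """Return k_res * xi for a drive frequency w = omega/Omega_0."""
    d = 1.0 + a12_over_a
    return np.sqrt(np.sqrt(d**2 + w**2) - d)

for w in (0.5, 1.0, 5.0):
    x = k_res_xi(w, a12_over_a=0.5)
    check = x * np.sqrt(x**2 + 2.0 * 1.5)   # Omega_{k+}/Omega_0 at k_res
    print(f"omega/Omega0 = {w}: k_res*xi = {x:.4f}, check = {check:.4f}")
```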
The numerical solution of Eq. (30) is shown in Fig. 2. One can clearly identify two phases of evolution. In phase I, $\Omega_0 t \lesssim 4$, although the density $n_R(t)$ is still low, the dynamics follows an exponential growth due to the considerable effect of both quasiparticles and disorder on the two BECs. In region II, $\Omega_0 t > 4$, we observe that as the disorder evolves in time, the glassy fraction increases and exhibits sinusoidal oscillations, signaling that the system becomes completely depleted at long times. This can be attributed to the motion of the Gaussian disorder, which may create elementary excitations (i.e., Bogoliubov phonons at low momenta and free particles in the high-energy regime), enhancing the disorder depletion. Therefore, whatever the strength and the frequency of the disordered potential, the condensates are localized, since $n_R(t)$ grows without bound at long times. In this region, the dynamics slightly slows down without any saturation, and it presumably follows another growth law. It is worth noticing that a very similar change in behavior has been observed in the dynamics of a periodically driven BEC in a shaken 1D lattice [43].
It is clearly visible that in the case of σ < ξ, the time-dependent glassy fraction increases with decreasing interspecies interactions (see Fig. 2.a). Whereas for σ > ξ, $n_R(t)$ varies with small oscillations and remains almost insensitive to $a_{12}/a$, which means that the disorder density is protected against interspecies interaction effects during its time evolution (see Fig. 2.b). Furthermore, as the healing length ξ decreases, the chemical potential rises and the density n of the two BECs grows. In this situation, the low-lying excitations are only weakly affected by the disorder time evolution, which lowers the glassy fraction.
In Fig. 3 we present the time evolution of the disorder fraction as a function of the relevant parameters. We see that at short times, $\Omega_0 t \lesssim 4$, the atoms are almost delocalized, i.e., $n_R(t)$ is vanishingly small, whereas $n_R(t)$ increases as time goes on, which implies a possible reduction of the condensed fraction whatever the values of σ/ξ, ω/Ω_0, and $a_{12}/a$. The dynamics slows down significantly for small external frequencies ω < Ω_0, relatively strong interspecies interactions $a_{12} \gtrsim 0.5a$, and large disorder correlation σ > ξ, as displayed in Figs. 3.a, b, and c. Figures 3.a and 3.c also show that the oscillation strength of the condensate deformation strongly depends on σ/ξ and ω/Ω_0.
Most noteworthy, for fixed σ > ξ and a varying ratio ω/Ω_0, the density $n_R(t)$ remains practically constant in time for $\Omega_0 t \gtrsim 10$, apart from tenuous wiggles (see Fig. 3.c), which could be an indicator of the existence of stationary Floquet condensates [33, 34]. Frankly speaking, a qualitative analysis of these Floquet states in the presence of such periodic perturbations requires further work.
At the times $t = (2j+1)\pi/\omega$, the stationary deformation takes the form of Eq. (31). This equation shows that the time dependence cancels, meaning that the state hardly moves over such a time interval. The leading term in Eq. (31) represents the equilibrium disorder fluctuations, while the subleading terms are of dynamical origin, revealing the non-equilibrium feature of the mixture, as shown in Fig. 4 (dotted and dashed lines). It is clear that the stationary depletion due to the disorder diverges from the equilibrium one, in particular for a small correlation length, regardless of the values of the interspecies interactions. On the other hand, after the adiabatic introduction of the disorder followed by the adiabatic switch-off, i.e., t = π/ω and ω → 0, the stationary depletion (31) becomes close to the equilibrium state, namely $n_R = (n/8\pi^3\hbar^2)\int d\mathbf{k}\,[R(k)\,\omega_k^2/\Omega_{k+}^4] = n_R^{\rm eq}$, as depicted in Fig. 4 (see dotted and solid lines). Conversely, for ω → ∞, one has from Eq. (31) $n_R(\omega) = \pi^{3/2}(\xi_+/\ell_L)(\Omega_0/4\omega)^2(\xi_+/\sigma)^3$. Such a condensate deformation is a signature of the non-equilibrium property of steady states of a BEC in a time-dependent disorder potential.
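For illustration, the high-frequency limit quoted above, read here as a depletion fraction $n_R/n$ (an interpretive assumption), falls off as $1/\omega^2$; the parameter values in the sketch are placeholders.

```python
import numpy as np

# High-frequency limit quoted above, read as a depletion fraction n_R/n
# (an interpretive assumption); parameter values are illustrative only.

xi_over_lL = 0.01     # xi_+ / Larkin length (weak disorder, illustrative)
xi_over_sigma = 0.5   # xi_+ / correlation length (illustrative)

def nR_high_freq(w):  # w = omega / Omega_0
    return np.pi**1.5 * xi_over_lL * (1.0 / (4.0 * w))**2 * xi_over_sigma**3

for w in (1.0, 2.0, 4.0):
    print(f"omega/Omega0 = {w}: n_R/n ~ {nR_high_freq(w):.2e}")
```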
IV. CONCLUSIONS
We investigated the non-equilibrium evolution of binary BECs subjected to a time-dependent random potential with a Gaussian correlation function. To this end, we applied the time-dependent perturbative mean-field theory, which allowed us to reveal the spectacular interplay of the disorder and the interspecies interactions in the non-equilibrium regime. The theory assumes weak interactions and weak disorder; hence it remains valid provided the depletion remains small throughout the full subsequent dynamics.
We first shed new light on our understanding of the equilibrium process and also established some useful formulas for the glassy fraction and the EoS for both symmetric and asymmetric (see Appendix) Bose mixtures. We pointed out that for a large disorder correlation length, the localization of bosons is suppressed owing to the screening of the random potential by the interaction.
In the case of a Gaussian disorder potential with time-periodic driving, we showed that the complex combination of atomic interactions, the disorder potential, and time-periodic perturbations may uncover new phenomena in dirty Bose mixtures. The disorder fluctuations grow with time and exhibit an oscillating character, whose magnitude strongly depends on the system parameters. To date, there is no experimental work confirming this oscillating character of the condensate deformation. We found that at short times such an evolution follows an exponential growth law, while at larger times, where the theory fails, the dynamics is characterized by another law. Among the main results emerging from our study is that even though the disorder is naively weak, it could have dramatic effects on the localization of atoms during the time evolution of the system. The present analysis also revealed the occurrence of a stationary state due to the crucial role played by the drive frequency in the limit of large disorder correlation length. We conjecture the existence of specific parameters enabling one to transform dynamic BECs into Floquet condensates.
Our time-dependent results can readily be extended to the case of trapped mixtures under the assumption that the disorder correlation length should be much smaller than the spatial extent of the atomic cloud [16,45]. We believe that our predictions open up new perspectives for an experimental demonstration of the peculiar interplay between the disorder and interaction in the nonequilibrium regime. | 2021-12-22T06:16:57.845Z | 2021-12-20T00:00:00.000 | {
"year": 2021,
"sha1": "e49206afec83dcafcf196cb9aa2cadf3394cfac8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "45d2966c6f3f23d0d7045257aa8b983253b8d103",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
18068704 | pes2o/s2orc | v3-fos-license | Visualization of polysaccharides in the cuticle of oligochaeta by the tris 1-aziridinyl phosphine oxide method. Demonstration of 62.5 and 185 Angstrom periodicities in cuticular fibers.
INTRODUCTION
The cuticle of oligochaets (worms) is a fibrous structure which overlies the epidermal cells (10, 13). Cuticular fibers, as evidenced by wide-angle X-ray diffraction patterns (1-4, 7, 25), amino acid analysis (18, 36, 38), optical rotation (19), and collagenase and trypsin action (19), are shown to contain collagen. Noncuticular collagen of oligochaets in the electron microscope image has 560 Å periodicities (6, 33), but until now no report has demonstrated the presence of these periodicities in the cuticular collagen (6, 37). Cuticular fibers are composed of approximately 80% protein and 20% nonnitrogenous carbohydrates (31). Maser and Rice (20) have suggested that oligochaet cuticular collagen contains dimers of tropocollagen bound by carbohydrates which obscure the ultrastructural pattern by preventing protein side groups from reacting with heavy metals. The significance of carbohydrates bound to cuticular fibers in oligochaets has not, however, been experimentally tested. Nor have any of the molecular models proposed by chemists for the collagen-carbohydrate linkage been morphologically verified. Cuticular fibers fixed by standard chemical methods are uniformly dense in electron micrographs, and no distribution of carbohydrates can be inferred from them.
Since tris 1-aziridinyl phosphine oxide (TAPO) has a strong chemical affinity for polysaccharides (11, 39), we adopted the method of prefixation with TAPO mixtures (11, and unpublished results) for the study of the distribution of carbohydrates in the cuticle of oligochaets. The TAPO method visualized more structural components of the cuticle than did standard methods; the structural components visualized by TAPO were analyzed by cytochemical techniques. Lead and uranyl staining and cytochemical techniques revealed a crossbanding in the cuticular fibers which had a periodicity compatible with that of the noncuticular oligochaet collagen.
MATERIALS AND METHODS
The following species were used: Tubifex tubifex (Mull.), Eisenia foetida (Sav.), Lumbricus terrestris (L.), Enchytreus albidus (Henle), and Octolasium complanatum (Duges). The animals were selected to represent both aquatic and terrestrial biotopes and to cover the greatest possible range of variations in size. Where possible, animals belonging to the same species were collected from different biotopes. Apparently healthy specimens were divided before fixation into three groups, the first being anesthetized with magnesium chloride (9), the second with 7% ethanol, and the third by cold (0°C). Anesthetized worms, either in toto or minced with sharp scissors, were prefixed in one of the following prefixatives dissolved in phosphate buffer (0.1 M; pH 7.2): (a) 2.5% glutaraldehyde (28); (b) 1% acrolein; (c) 1% TAPO (39) (Polysciences, Inc., Warrington, Pa.); (d) 1% acrolein and 1% TAPO (11). Methylene chloride was evaporated from the original solution furnished by the producer (80% TAPO and 20% methylene chloride) at 32°C under low pressure (20 mm Hg); when bubbling ceased, bidistilled water was added to make a 5% TAPO solution. This solution was stored at 0°C and used to prepare the indicated fixatives. The times of prefixation were: for solution (a), 2 h; for solutions (b), (c), and (d), from 2 to 12 h. The mixture of acrolein and TAPO was left at room temperature for 1 h before use. All prefixed worms were postfixed in 4% unbuffered OsO4 for 12 h. One portion of the worms was fixed exclusively in 1.3% OsO4 buffered with s-collidine (8). All fixed worms were treated overnight with 0.5% uranyl acetate solution in Michaelis buffer (pH 5.8) at room temperature (12-14), then embedded in Vestopal W (27) (lot 68, Madame Martin Jaeger, Geneve, Switzerland). Method (a) and fixation with osmium tetroxide alone are referred to in the following text as classical fixation methods. Ultrathin sections were stained with lead citrate and uranyl acetate at room temperature (24, 34, 35) or with 2.5% uranyl acetate in absolute ethanol at 60°C for 3-6 h (15). Some sections were oxidized with periodic acid (E. Merck AG, Darmstadt, Germany) for 25 min, then treated with thiosemicarbazide (E. Merck AG) for a period from 12 to 72 h. The final reaction was carried out either with vapors of osmium tetroxide for 2 h according to Seligman et al. (30) or with silver proteinate (E. Merck AG) for 30 min according to Thiéry (32). No information is available on the influence of TAPO on the course of Seligman's or Thiéry's cytochemical reactions, since free aldehyde groups of acrolein can also give artifactual reactions with cytochemical reagents. Therefore, we carried out thiosemicarbazide treatment and silver proteinate staining after omission of oxidation with periodic acid on sections of acrolein-TAPO-osmium tetroxide-fixed material (see Table II). The effects of the omission of reaction either with thiosemicarbazide or with silver proteinate in periodic acid-oxidized sections of similarly fixed material were also evaluated. The effects of unspecific staining with silver proteinate were evaluated at high magnifications (about × 300,000).
Procedures consisting of the omission of oxidation with periodic acid in the material fixed by classical methods do not constitute valid controls, since osmium bound to tissue structure is apt to react with thiosemicarbazide and consequently give a positive reaction with silver proteinate. However, in the case of acrolein-TAPO-osmium tetroxide-fixed material, no such unspecific binding of thiosemicarbazide to osmium occurred. Other controls were done by evaluating the effects of the omission of reaction either with thiosemicarbazide or with silver proteinate in sections oxidized with periodic acid (Table II). The silver proteinate staining could be run for 20-30 min without causing unspecific staining.
FIGURE 1 T. tubifex prefixed with glutaraldehyde and postfixed with OsO4. The electron micrograph shows the relation between the dermal epithelial cells and the cuticle. Numbers indicate layers of the cuticle: 1, periepithelial space, which after application of the glutaraldehyde-osmium tetroxide method is electron transparent; 2, fibrillar layer composed of cuticular fibers embedded in the matrix (both fibers and the matrix are of medium electron density); 3, the epicuticle with an electron-opaque cortical layer; 4, ellipsoidal bodies of uniform electron density; 5, external filamentous layer, seen here as traces of filamentous material. N, nucleus of the epithelial cell. Lead citrate, uranyl acetate staining at room temperature. × 15,000.
Some sections were oxidized with periodic acid for 25-30 min and treated for 1 h with 1 % phosphotung- T. tubifex prefixed with glutaraldehyde and postfixed with 0s04 . The electron micrograph shows the relation between the dermal epithelial cells and the cuticle . Numbers indicate layers of the cuticle . 1, periepithelial space which after application of the glutaraldehyde-osmium tetroxide method is electron transparent ; N, fibrillar layer composed of cuticular fibers embedded in the matrix; both fibers and the matrix are of medium electron density ; 3, the epicuticle with electron-opaque cortical layer ; 4, ellipsoidal bodies of uniform electron density; 5, external filamentous layer seen here as traces of filamentous material. N, nucleus of the epithelial cell . Lead citrate, uranyl acetate staining at room temperature. X . 15,000 . stic acid (PTA) dissolved in distilled water (5,16,17,21,22) .
Siemens Elmiskop IA and 101 electron microscopes were used throughout this study. Care was taken to compensate the astigmatism of the instruments within 0.1 µm, with photographic control of the compensation. We determined the magnification of the electron microscopes using a grating replica.
RESULTS
The structure of the cuticle in the species being studied, fixed by classical methods, was compatible with the general pattern found by previous investigators (9, 10, 23, 26). The cuticle is composed of five layers (Figs. 1 and 2).
Figs. 1 and 2 show the pronounced differences in the cuticular structures after fixation by classical methods and by the acrolein-TAPO-osmium tetroxide method. The differences are presented diagrammatically in Table I. It is seen that the acrolein-TAPO-osmium tetroxide method visualizes more structural components of the cuticle than classical methods. No empty spaces are seen in the layers of the cuticle (Fig. 2). Among structures visualized only by the acrolein-TAPO-osmium tetroxide method are: a periepithelial layer 250 Å thick, closely apposed to the external leaflet of the plasma membrane of epithelial cells (Fig. 3); the substructure of the cuticular fibers (Fig. 4); and a differentiation of ellipsoidal bodies into two populations, one containing an electron-opaque core and another with an electron-transparent core (Fig. 2). The acrolein-TAPO-osmium tetroxide method also visualized to a higher degree the structures visible in specimens fixed by classical methods, such as the filamentous matrix of the fibrillar layer and the external filamentous layer covering the ellipsoidal bodies.
FIGURE 2 T. tubifex fixed by the acrolein-TAPO-osmium tetroxide method. As compared with Fig. 1, membranous organelles of epithelial cells are less clearly visible, but the layers of the cuticle are better defined morphologically and are more electron opaque. No electron-transparent spaces are seen in the cuticle. The arrow indicates the junctional complex between dermal epithelial cells. After application of the acrolein-TAPO-osmium tetroxide method this complex always contained electron-opaque material. Lead citrate, uranyl acetate staining at room temperature. × 17,300.
Each cuticular fiber, in the acrolein-TAPO-osmium tetroxide-fixed material, is composed of an electron-opaque peripheral zone and an electron-transparent central core. The peripheral zone of longitudinally cut, lead- and uranyl-stained cuticular fibers is composed of several layers of filaments. Deeper layers of longitudinally cut cuticular fibers contain filaments regularly spaced at 62.5 ± 5 Å intervals. A crossbanding with periodicities of 185 ± 35 Å can also be seen.
We observed species-dependent variability of the acrolein-TAPO-osmium tetroxide-fixed cuticular structures (see Table II). The periepithelial layer was consistently observed only in T. tubifex; 62.5 and 185 Å periodicities were found reproducibly in T. tubifex, E. foetida, L. terrestris, and O. complanatum, while in E. albidus the periodicities were found only in 800-1,000 Å thick cuticular fibers. Other structures undergoing species-dependent variability are listed in Table II. Cuticles fixed by acrolein alone (method b) and by TAPO alone (method c) were poorly preserved.
The method of Seligman et al. (30) gave comparable results in cuticles fixed by method (a) and method (d) (see Table I). No substructure of the main cuticular components, however, could be revealed by this method.
The method of Thiéry (32) gave coarse silver granules over the cuticle fixed by the classical method. Cuticles fixed by the acrolein-TAPO-osmium tetroxide method and stained by Thiéry's reaction showed finer silver granules when compared with those of classical methods, but the resolution of silver proteinate staining was always low. Thiéry's reaction visualized the differentiation of cuticular fibers into a peripheral zone and a central core. A superficial layer of the longitudinally cut peripheral zone of cuticular fibers after silver proteinate staining (32) was irregularly covered by silver granules. Deeper sections of this zone showed crossbanding of two types (Fig. 5): (a) linearly arranged deposits of silver granules spaced at 60-65 Å intervals, and (b) broad regular bands with a center-to-center distance of about 185 Å. Only a small part of the structures revealed by lead and uranyl staining in the matrix of cuticular fibers fixed by the acrolein-TAPO-osmium tetroxide method reacted with silver proteinate (32) (Table I). Among the reacting structures were filaments oriented perpendicular to the cuticular fibers, in register with their dense bands. The periphery of the ellipsoidal bodies and the external filamentous layer gave a positive Thiéry's reaction.
The results of procedures in which certain steps of the cytochemical staining were omitted, in order to be able to better evaluate the results of Thiéry's reaction as applied to acrolein-TAPO-osmium tetroxide-fixed cuticles, are summarized in Table III. It can be seen that periodic acid oxidizes all metallic deposits, and aldehydic groups introduced by the technical procedure were sparsely dispersed only over the epicuticle. Other controls show that the majority of silver granules is bound to thiosemicarbazide, which reacted with oxidized vic-glycol groups, since direct staining with silver proteinate does not augment the electron density in cuticular structures.
DISCUSSION
The acrolein-TAPO-osmium tetroxide fixation furnished a higher contrast and a higher resolution of cuticles than classical methods of fixation of oligochaets for electron microscopy. As well, the silver proteinate reaction (32) gave a higher resolution in the acrolein-TAPO-osmium tetroxide-fixed cuticles than in those fixed by classical methods. We observed many new structural components of the cuticle in acrolein-TAPO-osmium tetroxide-fixed oligochaets. A considerable part of the new structural information concerned the cuticular fibers. We observed that they were composed of an electron-opaque peripheral zone and an electron-transparent central core. The possibility does exist that the differences in electron density are due to inadequate penetration of the TAPO mixture into the cuticular fibers. If this were so, then subsequent cross-reactions between molecules of cuticular fibers, fixatives, and various staining reagents could in themselves lead to the differences in electron density revealed in the cuticular fibers. Since similar differences are visible in cuticular fibers also after the silver proteinate reaction, which does not strongly depend on the presence of TAPO in biological structures (32), and since there is an abrupt fall of electron density at the interface between the peripheral zone and the central core, the existence of these components in vivo is not excluded. Only the application of the acrolein-TAPO-osmium tetroxide method permits the visualization of the crossbanding of cuticular fibers. Such a crossbanding has never been observed in the electron microscope, and the spatial order in cuticular fibers has been deduced only from X-ray crystallographic studies (1-4, 7, 25).
It is difficult to compare structures evidenced in acrolein-TAPO-osmium tetroxide-fixed, lead- and uranyl-stained material with similarly fixed cuticles which are silver proteinate stained. The silver proteinate reaction revealed only a part of the cuticular structures, namely, those containing polysaccharides with free vic-glycol groups. On the other hand, a majority of the cuticular structures could be revealed by lead and uranyl staining. Cuticular structures not revealed by silver proteinate staining and visible only after lead and uranyl staining may also contain polysaccharides, but their vic-glycol groups may be masked or absent.
FIGURE 4 E. foetida fixed by the acrolein-TAPO-osmium tetroxide method. Cuticular fibers, as seen in cross sections, are composed of a peripheral electron-opaque zone (arrows) and an electron-transparent core (c). Lead citrate and uranyl acetate staining at room temperature. × 50,000.
Staining with PTA, in spite of controversies concerning its specificity (29), gives useful information .
X-ray diffraction (1-4, 7, 25) and chemical analysis (18, 19, 36, 38) have unequivocally shown that cuticular fibers contain collagen; the only subject for speculation was the location of the collagen in the cuticular fibers. We have not been able to furnish any data in support of a hypothesis on this location. The "insertions" (unpublished results), that unique structure which would account for the end-to-end linkage between proteins and carbohydrates, as suggested by Maser and Rice (20), are spaced at intervals that are larger than the length of the tropocollagen molecule, while the periodicities of the transverse banding reported in this note are too small. Only the peripheral zone of cuticular fibers reacted with silver proteinate. If the division of cuticular fibers into a peripheral zone and a central core reflected the in vivo state, the only possible location for collagen would be the central core. However, none of the techniques used in this study has revealed any periodicity typical of oligochaet noncuticular collagen in the central core of the cuticular fibers. It is possible that the lack of penetration of the TAPO mixtures into the central core disallows cross-binding with heavy metals. The periodicities of the crossbanding of cuticular fibers (62.5 and 185 Å), when multiplied by the factors 9 and 3 respectively (9 × 62.5 Å = 562.5 Å; 3 × 185 Å = 555 Å), give values close to 560 Å, which corresponds to the main period of the noncuticular collagen of oligochaets (6, 33). This smaller subperiodicity, however, might be explained in other ways.
FIGURE 5 T. tubifex fixed by the acrolein-TAPO-osmium tetroxide method. Sections were oxidized for 25 min with periodic acid, left to react with thiosemicarbazide for 72 h, and then stained with silver proteinate for 30 min. Cuticular fibers (CF) cut longitudinally show transverse banding of regular periodicity. Some filaments of the matrix (small arrow) are in register with dense bands on cuticular fibers. Cross sections of cuticular fibers (large arrow) show deposits of silver granules predominantly over the peripheral zone of the fiber. Granules of silver are also seen external to the plasma membrane of the cytoplasmic process (C). The epicuticle shows deposits of silver granules over two distinct layers (empty arrows). Silver granules mark the periphery of ellipsoidal bodies (e). Large accumulations of silver granules are seen just to the right of the asterisk over the external filamentous layer. × 90,000.
We are most grateful to Professor Pietro Omodeo from the Institute of Animal Biology, University of | 2014-10-01T00:00:00.000Z | 1973-06-01T00:00:00.000 | {
"year": 1973,
"sha1": "f775be782e9a52c94a52f76442e01d882122386f",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/57/3/859/1266679/859.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f775be782e9a52c94a52f76442e01d882122386f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
225223609 | pes2o/s2orc | v3-fos-license | Linking Cultural Heritage with Cultural Tourism Development: A Way to Develop Tourism
This study presents the main ideas of sustainable cultural tourism development, a form of tourism centered on discovering and exploring the culture of each region. It implies taking economic, environmental, and socio-cultural aspects into account in tourism planning and management. The paper presents the historical background of the idea of sustainability and the factors that affect the sustainability of culture in tourism development. The author emphasizes the negative effects of tourism on cultural preservation that can be prevented by applying the principles of sustainable development, and at the same time proposes solutions to balance economic development and cultural preservation.
Introduction
Currently, there are many types of tourism (Sultan & Banu, 2018). According to the criteria of tourism resources and tourism needs, there are natural eco-tourism types and cultural tourism types. Natural eco-tourism meets visitors' need to explore and immerse themselves in nature and is organized around the attraction of natural tourism resources. Cultural tourism is organized around the cultural values of the destination, meeting tourists' need to learn about the country, people, and culture of the place they visit. Among these forms of tourism, cultural tourism plays an important role in sustainable tourism development.
Sustainable tourism is the concept of visiting somewhere as a tourist and trying to make a positive impact on the environment, society, and economy (Peeters & Dubois, 2010). Tourism can involve primary transportation to the general location, local transportation, accommodations, entertainment, recreation, nourishment and shopping. It can be related to travel for leisure, business and what is called VFR (visiting friends and relatives) (Peeters & Dubois, 2010). There is now a broad consensus that tourism development should be sustainable; however, the question of how to achieve this remains an object of debate.
Sustainability is a prominent trend in contemporary life, concerning both development and operation, including in the tourism sector. However, there is confusion about the different meanings of sustainability and whether it can be achieved in tourism (Dwyer & Edwards, 2010). A question therefore arises: does sustainable development apply to cultural tourism? The purpose of this paper is to demonstrate the necessity of sustainable development in cultural tourism under the conditions of both economic development and environmental protection, while at the same time protecting cultural heritage. The object of this article is to present sustainable cultural tourism: the concept of sustainable tourism development and the protection of cultural heritage. Moreover, the author presents a critical view of sustainable tourism development in Vietnam: the current situation and the need for change. The tasks arising from this purpose are as follows: to present the economic, environmental, and socio-cultural aspects influencing sustainable cultural tourism development. The monographic and descriptive method was applied in the paper. It is worth emphasizing that there is a large body of American and English literature on the topic. Sustainable tourism is a term often explained, described, and used in Western tourism handbooks, sometimes even as a separate publication. However, there is not much literature in Vietnamese publications. This may be due to the low level of development of the tourism sector in Vietnam, to a lack of interest from tourism management authorities, or to the lack of comprehensive links between tourism management authorities and cultural management authorities in Vietnam. Another reason may be limited interest in the issue of sustainability among those managing tourist facilities and tourism activities at cultural heritage sites. Most of the Vietnamese literature is based on foreign bibliography and international documents introducing sustainability principles.
Concept of Sustainable Development
To explore the principles and objectives of sustainable cultural tourism development, it is first necessary to define the term sustainable development. Despite the widespread acceptance of sustainable development, there remains a lack of consensus over the actual meaning of this term. It means different things to different people and can be applied to any context, including tourism. The concept of sustainability first appeared in the public sense in the report by the World Commission on Environment and Development (WCED) in 1987. Its outline is that economic growth and environmental conservation are not opponents but partners, and one cannot survive without the other. The Brundtland Commission Report defined sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (WCED, 1987). The modern concept of sustainable development from the WCED (1987) is also rooted in earlier ideas about sustainable forest management and twentieth-century environmental concerns. However, as the world economy developed, the focus shifted more towards economic development, social development, and environmental protection for future generations. It has been suggested that the term "sustainability" should be viewed as humanity's target goal of human-ecosystem equilibrium, while "sustainable development" refers to the holistic approach and temporal processes that lead us to the endpoint of sustainability (Shaker, 2015). Modern economies are endeavoring to reconcile ambitious economic development with the obligation to preserve natural resources and ecosystems, as the two are usually seen as being of a conflicting nature. Instead of treating climate change commitments and other sustainability measures as a brake on economic development, turning and leveraging them into market opportunities will do greater good.
The concept of sustainable development is variously described as eco-development, self-sustaining development, or suspensory development. Sustainable development is based on three pillars: economic development, environmental protection, and social development. Recently, the term "social development" has been replaced by "socio-cultural development". This concept assumes a properly and consciously shaped relationship between the pillars, which is intended to ensure intra- and inter-generational economic, environmental, and social balance (Meyer & Milewski, 2009).
Concept of Sustainable Tourism Development
Sustainable tourism development and sustainable development have a very close relationship. In fact, sustainable development and sustainable tourism development are both related to the environment. In tourism, the environment has a very broad connotation: it is the natural, economic, cultural, political, and social environment, and it is a very important factor in creating diversified and unique tourism products.
Obviously, without environmental protection, development would decline; but without development, environmental protection would fail. Therefore, we need to develop tourism without harming resources or negatively affecting the environment. In other words, sustainable tourism must be the development trend of the tourism industry.
Tourism has become a major economic activity within developed and developing countries, often contributing more foreign currency than traditional primary commodity exports. The growth and maturing of the tourism sector raised concerns that the resources of host countries might be exhausted. Attention has been paid to the relationship between tourism and the environment and to the problems associated with tourism expansion (Philip & Gianna, 1985).
In addition to environmentally friendly development, the concept of sustainability also includes a tourism approach that recognizes the role of the local community, the treatment of labor, and the desire to maximize the economic benefits of tourism for the local community. In other words, sustainable tourism is not only about environmental protection, but also about long-term economic viability and social equity. Sustainable tourism cannot be separated from sustainable development.
Sustainable tourism development is defined as all forms of activities, management, and development of tourism that preserve natural, economic, and social integrity and guarantee the maintenance of natural and cultural resources. Sustainable tourism development guidelines and management practices are applicable to all forms of tourism in all types of destinations, including mass tourism and the various niche tourism segments.
Much research has found that "sustainability is praised a positive approach intended to reduce the tensions and friction created by the complex interactions between the tourism industry, tourists, environment and the host communities to maintain the long term capacity and quality of both natural and human resources" (Bramwell & Lane, 1993).
Currently, there is still no consensus in the world on the concept of sustainable tourism development, and sustainable tourism is defined in a number of ways. Machado (2003) has defined sustainable tourism as: "Types of tourism that meet the current needs of tourists, the tourism industry, and the local community, but do not affect their ability to respond to the needs of generations to come. Tourism that is economically viable but does not destroy the resources on which the future of tourism depends, especially the natural environment and the social structures of local communities". This definition focuses on the sustainability of tourism forms (tourism products), but does not generally address the sustainability of the entire tourism industry.
According to the World Travel and Tourism Council (WTTC), 1996, "Sustainable tourism is about meeting the current needs of tourists and tourism areas while ensuring the ability to meet the needs of future generations of tourism". This is a brief definition based on UNCED's definition of sustainable development. However, this definition is too general, referring only to the satisfaction of the needs of current and future visitors, and not to the needs of local communities, the ecological environment, biological diversity, etc.
According to Hens L. (1998): "Sustainable tourism requires managing all types of resources in some way so that we can meet the economic, social and aesthetic needs while maintaining cultural identity, basic ecological processes, biodiversity, and life-support systems". This definition focuses only on the management of tourism resources for sustainable tourism development.
At the 1992 United Nations Conference on Environment and Development in Rio de Janeiro, the World Tourism Organization (UNWTO) defined: "Sustainable tourism is the development of tourism activities to respond to the current needs of tourists and indigenous peoples while being concerned with the conservation and improvement of resources for future tourism development. Sustainable tourism aims to satisfy the economic, social, and aesthetic needs of human beings while maintaining cultural integrity, biodiversity, the development of ecosystems, and support systems for human life" (Luong, 2002). This definition is somewhat long but contains the full range of contents, activities, and factors related to sustainable tourism; it also focuses on local communities, protecting the ecological environment, and preserving cultural identity. In this paper, the concept of sustainable tourism development is understood according to the connotation of this 1992 UNWTO definition, together with the goals of sustainable tourism set out by Inskeep (1995). With the above views, sustainable tourism development can be considered a branch of sustainable development in general, as determined in 1987 by the World Commission on Environment and Development. Sustainable tourism development is the activity of developing tourism in a specific area so that its content, form, and scale are appropriate and sustainable over time, do not degrade the environment, and do not affect the ability to support other development activities. Conversely, the sustainability of tourism development activities is built on the foundation of success in the development of other industries and the sustainable development of the whole society.
Views on Cultural Tourism
Cultural tourism is a form of tourism based on national cultural identity, with the participation of the community to preserve and promote traditional cultural values. Alongside types of tourism such as ecotourism, medical tourism, adventure tourism, and educational tourism, cultural tourism has recently come to be considered a typical product of developing countries, attracting many international tourists.
Cultural tourism relies mainly on cultural products and traditional ethnic festivals, including religious customs, beliefs, etc., to attract tourists from the locality and from all over the world.
For tourists interested in researching and exploring local culture and customs, cultural tourism is an opportunity to satisfy their needs. Most cultural tourism activities are closely related to the locality: a place where many cultural festivals are preserved, and often also a place where poverty exists.
Tourists in developed countries often choose destination countries' festivals when organizing foreign tours. Therefore, attracting tourists to cultural tourism means creating a new flow of visitors and improving the lives of local people.
In underdeveloped or developing countries, development platforms largely rely not on large investments to create expensive tourist destinations, but rather on natural tourism resources and the diversity of national identity. These resources do not create great value for the tourism industry by themselves but contribute significantly to the development of the social community. Countries with strong cultural tourism development include Thailand, Indonesia, Malaysia, China, and some countries in South America.
Cultural tourism is a trend in many countries. This type of tourism is very well suited to the Vietnamese context and very beneficial for national poverty alleviation, so it must be considered a development direction for Vietnam's tourism industry.
In Vietnam, many cultural tourism activities are organized on the basis of regional characteristics: the Phuong Nam Land Festival program (the folklore festival of the Southern Delta region), Dien Bien Tourism (a Northwest cultural festival combined with political events marking 50 years since the victory at Dien Bien Phu), the Central Heritage Road (folk festivals combined with visits to UNESCO-recognized cultural heritage), etc. These cultural tourism activities attract many domestic and foreign tourists. Among them, the Hue Festival is considered the most distinctive cultural tourism activity in Vietnam. In addition, Vietnam currently has 22 tangible heritages recognized by UNESCO as world tangible heritage, such as the Complex of Hue Monuments, Hoi An Ancient Town, My Son Sanctuary, Thang Long Imperial Citadel, and the Citadel of the Ho Dynasty, all of which have become important cultural tourist destinations for domestic and foreign tourists.
Cultural Heritage
The content of the term 'cultural heritage' has changed considerably in recent decades, partially owing to the instruments developed by UNESCO. Cultural heritage does not end at monuments and collections of objects. It also includes traditions or living expressions inherited from our ancestors and passed on to our descendants, such as oral traditions, performing arts, social practices, rituals, festive events, knowledge and practices concerning nature and the universe, or the knowledge and skills to produce traditional crafts (UNESCO, 2004). Cultural heritage is an expression of the ways of living developed by a community and passed on from generation to generation, including customs, practices, places, objects, artistic expressions, and values. Cultural heritage is often expressed as either intangible or tangible cultural heritage (ICOMOS, 2002).
In Chapter VII, Implementation Provisions (Articles 73 to 74), the Law on Cultural Heritage of Vietnam (2001) sets out provisions on cultural heritage. Cultural heritage in this Law is understood to include both intangible and tangible cultural heritage: spiritual and material products with historical, cultural, and scientific values, handed down from generation to generation. Intangible cultural heritage is a spiritual product of historical, cultural, and scientific value, preserved in memory or writing and handed down by word of mouth, craft transmission, performance, and other forms of archiving and transmission, including voice, writing, literary works, art, science, oral language, folk performance, ways of living, lifestyles, festivals, traditional craft know-how, knowledge of traditional medicine, culinary culture, traditional costumes, and other folk knowledge. Tangible cultural heritage is a material product of historical, cultural, or scientific value, including historical-cultural relics, scenic spots, relics, antiques, national treasures, etc.
Vietnam's cultural heritage is a valuable asset of the community of Vietnamese ethnic groups and part of the cultural heritage of humanity, playing a great role in the people's cause of building and defending the country. Protecting and promoting the value of cultural heritage meets the increasing cultural needs of the people, contributes to building and developing an advanced Vietnamese culture imbued with national identity, enriches the treasure house of world cultural heritage, strengthens the effectiveness of state management, and enhances the people's responsibility to participate in the protection and promotion of cultural heritage values.
As part of human activity, cultural heritage produces tangible representations of value systems, beliefs, traditions, and lifestyles. As an essential part of culture as a whole, cultural heritage contains these visible and tangible traces from antiquity to the present day.
Relationship between Culture and Tourism
For many years in our country, a very convincing lesson and experience has been that culture in tourism is both an orienting goal and a view affirming that culture is the content and true nature of Vietnam's tourism, creating what is most distinctive and attractive in Vietnam's tourism products and contributing to building the national image in the eyes of international friends.
Tourism is a human social practice, formed by the organic combination of three elements: the traveler, the tourism resource, and the travel agency. The tourist is the tourism subject, the tourism resource is the tourism object, and the tourism industry is the broker providing services to tourists. In socio-cultural terms, tourism is a high-level cultural activity of people, because culture is the purpose that tourism aims for and the endogenous cause of tourism demand. No matter what the purpose of the traveler (visiting, studying, sightseeing, relaxing, etc.) or the mode of travel (road, rail, sea, air, etc.), the end purpose is the same: to satisfy one's own needs, to feel and enjoy the material and spiritual values created by people in a country outside of one's regular residence. In other words, tourism is human behavior toward the natural and social environment that benefits the traveler, and it is a beneficial activity that promotes human intellectual development.
The close relationship between culture and tourism is expressed through the following aspects. Culture is a unique resource of tourism (the source of raw material from which tourism activities are formed). When we say culture is the raw material of tourism activities, we mean it is the attraction enjoyed by the tourist. Cultural materials are of two basic types. Tangible culture consists of human creations that exist in space and can be perceived by sight and touch, such as historical and cultural relics, handicrafts, tools of production, ethnic dishes, etc. Intangible culture includes festivals, art forms, behavior, communication, and so on. According to the conception of the tourism industry, cultural elements are classified as human resources (as opposed to natural resources such as seas, rivers, lakes, mountains, caves, etc.), namely: historical-cultural relics; souvenir goods with national characteristics; cuisine; festivals; entertaining games; customs, practices, behavior, and communication; religious beliefs; and literature and art. Therefore, culture is the condition and environment in which tourism arises and develops. Along with natural resources, cultural resources are one of the typical conditions for the tourism development of a country, region, or locality. The values of cultural heritage (historic sites, architectural works, art forms, customs, festivals, traditional professions, etc.), together with economic, political, and social achievements, cultural and art establishments, museums, etc., are objects for tourists to explore and enjoy, and for tourism to exploit and use. The exploitation of and profit from these resources and the construction of tourist sites reflect the intelligence and creativity of mankind. These resources not only create the environment and conditions for tourism to arise and develop, but also determine the scale, type, quality, and efficiency of tourism activities in a country, region, or locality.
The relationship between tourism and culture is also manifested through behavior and ethics in service and in tourism business transactions. The essence of the relationship between culture and business in general, and tourism in particular (or the role of culture in economic development), has been affirmed. In other words, to be successful, business behavior must be conducted in a cultured way; this can be called, collectively, business art or business culture.
In another aspect, this close relationship is shown in the fact that tourism development needs a good tourism environment, including both the natural and the human environment, two inseparable factors. The natural environment means, for example, no litter, clean water, no writing on rocks, etc.; the humanistic environment means that relics are preserved and that residents and employees working in the tourist area are cultured, supported by a complete system of policies and laws.
Knowledge, social information, behavior, and an understanding of tourists' psychology are effective drivers promoting tourism development.
Conversely, tourism also plays a very important role in this relationship for culture. Tourism becomes a means to convey and display the cultural values of a locality and its people for all domestic and international tourists to explore, admire, learn about, and enjoy.
Thanks to tourism, cultural exchanges between communities and countries are strengthened and expanded.
Tourism is also a means to awaken and revive national cultural values that have been submerged or have faded over time amid historical events. These can be ancient architectural works, a living custom, a folk tune, a national dish, etc., showing the artistic, cultural, and technical level of past times. Thanks to tourism, these cultural assets are restored, exploited, and embellished, serving the need to validate the values of that heritage.
Cultural Heritage Motivates Tourism Development
Cultural heritage is a tourism resource with strong attraction and a driving force that draws more and more domestic and international tourists to Vietnam. The tourism industry currently considers it an important foundation and pillar for developing the tourism economy, alongside infrastructure, specialized technical facilities, and human resources. Cultural heritage is also an active support tool in image positioning and branding for Vietnamese tourism.
We have the right to be proud of the long history of our country's 54 ethnic groups, which has left us today a huge treasure of cultural heritage that is extremely rich, diverse, and unique, along with tens of thousands of historical, cultural, and scenic heritage sites. Tangible cultural heritage alone is estimated to include more than 3,000 national heritage sites and about 7,500 provincial-level heritage sites, with many monuments still being inventoried, together with a system of festivals, traditional craft villages, the culinary culture of regions and ethnic groups, folk cultural heritage, etc.
On the basis of promoting the unique values of each type of heritage, heritage tourism has grown strongly in recent years, and the number of domestic and international visitors has constantly increased, especially for heritage recognized by the State and honored by UNESCO. The attractiveness of heritage has motivated tourism development, bringing many benefits in terms of income, employment, and local socio-economic development. Specifically, the Complex of Hue Monuments welcomed 3 million tourists in 2017, of whom 1.8 million were international tourists, and earned 320 billion VND from entrance tickets; Hoi An Ancient Town is another case in point. In particular, cultural heritage is also an important factor that differentiates Vietnam's destination system and tourism products, connecting and diversifying trans-regional and international tourist routes.
Tourism Promotes Cultural Heritage Values
Throughout the world, cultural tourism has long been, and will remain, a basic strand or product line of tourism. Especially for countries and territories with a cultural depth measured by a dense heritage system like ours, heritage tourism becomes one of the outstanding strengths. Today, heritage tourism aims to attract visitors to seek out original values and to learn, interact, and experience so as to absorb cultural heritage values imbued with the identities of ethnic groups and regions. In our country, the policy of tourism development on the basis of preserving and promoting the fine traditional cultural heritage of the nation has been expressed in the Politburo's Resolution 08-NQ/TW on developing tourism into a spearhead economic sector. Cultural tourism is therefore a key product line of Vietnamese tourism, ranging from visits to cultural-historical sites, museum systems, cultural works, and art activities to exploring, interacting with, and experiencing local culture, festivals, and lifestyles, and enjoying regional cuisine and products.
It can be said that tourism has promoted the protection of the country's cultural treasures. It is visitors' needs to visit, learn, and experience that motivate the government and people to appreciate, take pride in, care for, preserve, restore, clarify, and promote the precious values of cultural heritage. Heritage-based tourism activities in many places such as Hue, Hoi An, and Ha Long have become the basis and main driving force for livelihoods and the main occupations of the people, as well as the main economic sectors of the locality. Heritage tourism creates both income and employment, generating the motivation and the resources to preserve and promote heritage values; at the same time, it actively supports improving the quality of life and increasing understanding of and respect for diversity across cultures, as a basis for forming a suitable code of conduct among local people, tourists, and heritage. The benefits of heritage tourism are considerable and are shared with businesses and people. Some of the revenue from heritage tourism is reinvested in heritage conservation, embellishment, honoring, restoration, and management. In that sense, heritage tourism makes a great contribution to the conservation and sustainable promotion of cultural heritage.
However, the current strong growth in tourism, especially mass tourism, has had and is having negative impacts on cultural heritage. Owing to the sensitive and vulnerable nature of heritage, uncontrolled mass tourism in many places, especially at the famous heritage sites of our country, is spreading impacts such as over-commercialization, visitor overload, abuse of heritage, improper restoration, and heritage "renewal", which cause the heritage to degrade quickly, become distorted, and lose its value. The consequences of uncontrolled and unsustainable heritage tourism development are threatening the integrity of heritage. In recent times, some famous heritage sites have seen development investment activities that seriously damage the heritage, for which a very expensive price will later be paid to restore the heritage values that have been compromised. On the other hand, when tourism is over-commercialized, cultural value is cheapened; the risks of fading identity, disrupted local traditions and lifestyles, increasing community divisions, conflicts of interest, and conflicts over access to resources, including cultural heritage resources, are sounding alarm bells for stakeholders in the sustainable management of cultural heritage resources in tourism development.
The Fundamental Goals of Sustainable Tourism Development
In order to develop cultural tourism and build sustainable development principles, it is first necessary to define the basic goals. These are:
Economic efficiency: Ensure economic efficiency and competitiveness so that businesses and tourist destinations can continue to prosper and achieve long-term profits.
Local development: Maximize the contribution of visitors to the prosperity of the local economy in tourist sites and tourist areas, including ensuring that tourist spending is retained locally.
Creating jobs and raising income levels: Increase the quantity and quality of local jobs created and supported by the tourism industry, without discrimination by gender or other characteristics.
Social equity: Ensure a fair and generous redistribution of the economic and social benefits of tourism to all deserving members of the community.
Tourist satisfaction: Provide safe, quality services that satisfy the needs of tourists.
Enhancing the role of local organizers: Attract and empower local communities to plan and make tourism development and management decisions, in consultation with related parties.
Social security: Maintain and enhance the quality of life of local people, including social organizational structures and access to resources and life-support systems, while avoiding environmental degradation and excessive social decline in all their forms.
Preservation of cultural values: Respect and increase the value of historical heritage, national cultural identities, traditions, and the special identities of local communities at tourist sites.
Protecting nature: Maintain and improve the quality of landscapes, in both rural and urban areas, and avoid environmental degradation.
Efficient use of resources: Minimize the use of scarce and non-renewable resources in the development and operation of tourism facilities and services.
Protecting the environment: Minimize air, water, and soil pollution and the waste generated by tourists and travel agencies.
The Principles of Sustainable Tourism Development
Tourism is an integrated, interdisciplinary, inter-regional, and highly socialized economic industry; developing it sustainably requires a joint effort by the whole of society. The goal of sustainable development is to bring harmony between socio-economic development and the environment without compromising the future. In order to accomplish the above objectives, it is necessary to identify the principles of sustainable tourism development, which serve as guidelines for subsequent activities and help tourism develop sustainably in the future.
Exploiting and using resources appropriately: Resources include geographical position, natural resources, the national asset system, human resources, policies, capital, and markets, etc., both domestic and foreign, that can be exploited for tourism development. The sustainable use and conservation of natural and socio-cultural resources are essential to ensure long-term development, with exploitation for tourism activities calculated on the basis of current needs.
Minimizing excessive consumption of natural resources: Consuming natural resources at a moderate level allows them to recover and, at the same time, reduces the waste released into the environment. Natural resources should be planned and managed to avoid massive exploitation or overheated development.
Maintaining and conserving natural, social, and human diversity: It is necessary to respect the diversity of the nature, society, and environment of the destination and to control the pace, scale, and type of tourism development in order to protect the diversity of the local culture. This means considering the size and capacity of each region, closely monitoring the effects of tourism activities on flora and fauna, integrating tourism activities into the activities of the community, and preventing traditional occupations from being displaced by modern ones. Tourism should be developed in accordance with local culture, social welfare, and development needs, ensuring an appropriate scale and pace for different types of tourism so as to increase mutual understanding between tourists and residents.
Placing tourism development within the socio-economic master plan: The long-term existence of the tourism industry must lie within the strategic framework of the country, region, and locality in terms of socio-economic issues. To ensure development, the tourism industry needs to take into account the immediate needs of both residents and visitors; in planning, it is necessary to unify socio-economic and environmental aspects and to respect the strategies of the country, region, territory, and locality. Development of the tourism industry must be consistent with the locality and in accordance with the plans assigned by the locality, and that development must be sustainable and long-term.
Supporting the local economy: Given its interdisciplinary character, sustainable development does not stand alone but involves many other fields. In the tourism sector, support extends not only to businesses directly involved in tourism but also to the many businesses indirectly participating in this activity, which in turn supports the local economy.
Attracting local communities to participate in sustainable tourism development: Local community involvement is a guaranteeing factor for sustainable tourism development. When local communities are involved in tourism development, many favorable conditions are created for tourism, because community participation ties the rights and responsibilities of each resident to the overall development of tourism.
Gathering opinions from the people and stakeholders: Consulting stakeholders, residential communities, domestic and foreign organizations, non-governmental organizations, and government about project ideas is an important principle in sustainable tourism development. Sharing the interests of the parties is aimed at harmonizing those interests during implementation.
Focusing on the training of human resources: For sustainable tourism development, training and developing human resources is an extremely necessary task. There is a huge shortage of labor in the tourism sector, and the supply of professionally trained workers has not met the general needs of the industry. A skilled workforce not only brings economic benefits to the industry but also improves the quality of tourism products.
Attaching importance to scientific research in the tourism industry: In order for tourism to become a professional, modern, and sustainable economic sector, scientific research plays an important role in strategy building, planning, training, and the implementation of tourism development activities. The scientific and technological achievements in tourism in past years have become important scientific foundations with high practical applicability, contributing to the development of the tourism industry.
Returning to the relationship between culture and tourism: from an economic point of view, tourism creates an income source that allows localities to accumulate resources for socio-economic development, including culture. As a result, cultural assets are protected, repaired, and embellished at the same time as new cultural facilities are built and contemporary cultural values are enriched. Because culture and tourism have such an interactive, mutual relationship, they cannot be separated from each other.
Thus it is possible to affirm an argument: tourism is an integrated cultural activity, or, in other words, the connotation of tourism is culture, and culture is expressed either explicitly or implicitly throughout all aspects of tourism activities. The main activities of tourism include food, accommodation, excursions, shopping, and entertainment; in all these activities, beyond meeting essential needs, members of society express cultural characteristics and aspirations, admiring and pursuing the cultures of other places. Visitors may leave rooms with modern amenities to stay in stilt houses or simple leaf houses, abandon modern means of transport to travel by canoe or ride a cyclo through old streets, give up familiar dishes to enjoy unfamiliar ones, and be ready to spend large amounts of money on the specialties of other countries. The objects that visitors see, eat, touch, and grasp are specific material things, but they all contain some kind of spiritual culture that visitors come to see and buy. The most important thing visitors seek is not the material itself but the satisfaction of the psychological need to find the new, the strange, and the beautiful (Luong, 2002). Therefore, although tourism is an economic industry encompassing economic activities, tourism is in general a cultural activity: a socio-cultural activity of humanity.
Sustainable Tourism Development
In the current period, sustainable tourism development requires the development of high-quality tourism products capable of attracting and meeting the increasing needs of tourists without harming the natural environment or indigenous culture, while taking responsibility for the conservation and development of natural resources and the environment. In this regard, Agenda 21 for the travel and tourism industry, on environmentally sustainable development, of the World Tourism Organization and the World Travel and Tourism Council identified sustainable tourism products as "products that are built in accordance with the environment, communities and cultures, thereby bringing certain benefits rather than threats to tourism development" (Luong, 2002).
One of the most important focuses of sustainable tourism development today is working towards a balance between socio-economic goals and the conservation of natural resources, the environment, and community culture, while at the same time enhancing the satisfaction of the increasing and diversified needs of tourists. This balance can change over time as social norms, the conditions that ensure the ecological environment, and the development of science and technology change. However, any approach to ensuring sustainable tourism development must be based on a balance of environmental resources within a unified plan.
In Vietnam, the concept of sustainable tourism development is still relatively new. But through the lessons learned and practices of sustainable tourism development in countries around the world, tourism development in our country is moving towards responsibility for natural resources and the environment. As a result, many new types of tourism have emerged in Vietnam: ecotourism, nature tourism, green tourism, etc.
Among the new types of tourism in Vietnam listed above, ecotourism is considered an important approach to sustainable tourism development. In September 1999, the Vietnam National Administration of Tourism cooperated with the World Conservation Union and the Economic and Social Commission for Asia and the Pacific to organize a workshop on developing an ecotourism development strategy in Vietnam. At this workshop, a definition of ecotourism was given for the first time in Vietnam. Accordingly, "Ecotourism is a type of tourism based on nature and indigenous culture, associated with environmental education, contributing to conservation and sustainable development efforts with the active participation of the local community" (Luong, 2002). This result is considered a favorable prelude to the next steps in promoting the development of ecotourism in particular, and sustainable tourism in general, in Vietnam.
Although leading experts in tourism and related fields in Vietnam have not yet fully agreed on the concept of sustainable tourism development, the majority opinion so far holds that: "Sustainable tourism development is the managed exploitation of natural and human values to satisfy the diverse needs of tourists; to take care of long-term economic benefits; and to ensure contributions to the conservation and embellishment of resources, the maintenance of cultural integrity, and the protection of the environment for developing tourism activities in the future, as well as contributing to improving the living standards of local communities" (Luong, 2002).
Consistent with the interpretation of the concept of "sustainable tourism development" in the Vietnam Tourism Law (2005), and in line with the goal-oriented approach of the term "sustainable development", the author proposes the following concept of sustainable tourism development: "Sustainable tourism development is the development of tourism activities that brings economic benefits and creates jobs for society and the community; satisfies the diverse needs of all sectors participating in tourism on the basis of the exploitation of natural resources; at the same time invests consciously in embellishing, conserving, and maintaining the integrity of natural resources and ensuring a clean environment; and attaches the responsibility and interests of the community to the exploitation, use, and protection of natural resources and the environment."
Tourism Culture
"Tourism culture is not a simple addition between culture and tourism, but a combination of tourism and culture, a spiritual and material result due to the mutual interaction between three types: cultural and emotional needs of tourist subjects (tourists), cultural content and values of tourists (tourism resources can satisfy spiritual and material enjoyment of people), the cultural consciousness and qualities of travel brokers (guides, narrators, product designers, service staff, etc.) are produced: (Luong, 2002). Anyone of these three factors can not alone create a tourism culture. If you are separated from tourists, you will lose the target audience to enjoy and not fulfill cultural aspirations. Without a travel agent, the tourist subject and the tourist object cannot meet each other, tourism cannot be performed, without tourism, of course, tourism culture cannot arise. If there are no tourists, the tourism industry only has a reputation, it will not produce a new tourism culture, and even the inherent cultural and tourist components cannot be shown.
Thus, tourism culture means the cultural content expressed by tourism -is the culture accumulated and created by tourists and tourists in tourism activities. Tourism culture is born and develops with tourism activities.
The culture of the tourist subject is reflected in the process of enjoying tourism. Above all, it is expressed through a sense of the need for tourism because it clearly shows a certain cultural level and the social needs of many people. The concept of value, the form of thinking, aesthetics, personality, emotions, etc. will be revealed in tourism activities and it reflects national psychology. In addition, it is also expressed through travel behavior towards beauty, to cherish and cherish beauty. Unfortunately, many beaches, many landscapes are getting dirtier and dirtier because of the waste, not to mention the guestbook lines with all types on the cliffs, tree trunks, even engraved on the ancient beer, etc.
Tourists are the material base of tourism culture, these facilities provide both objects for tourists to visit and enjoy excursions, and only under the care of tourism can works.
The culture of the tourist is expressed through the values that tourism resources can provide to visitors, the values of sanitary aesthetics, the environment of the ability to improve physical and knowledge for tourists visitors, not to mention the very broad concept of values itself. For example, a tourism resource is a historical and cultural relic, the aesthetic value here is to respect the authenticity, the restoration, and embellishment of the relic deformation, losing the original beauty its head, in violation of the originality -historical authenticity of the monument, which can be considered a non-cultural act. This not only does not have the effect of attracting tourists but also to a certain extent harms the image of the tourist destination, the general cultural image of the country.
The culture in the tourist is also considered as a standard to define the quality of tourism products: The tourism industry includes both tourism services, tourist site management, direct customer contact, also includes the construction of tourist sites, sites, program design, and arrangement of service facilities. ... The most basic task is to bridge between the subject and the tourist in search of beauty and providing beauty. The culture shown in this agency is the tourism industry when designing tourism routes, building tourist sites, tourist establishments, services, etc. must create a cultural character. Must have the effect of improving the taste of visitors' life, making visitors feel peaceful, relaxed, enriching knowledge about nature, people and culture, and feel the beauty of the world natural, humanistic philosophy and indigenous culture.
It is necessary to ensure the rationality and optimization of investment in tourism infrastructure and equipment, but in addition to international practice, it must also have its peculiarities. According to the tourist routes and destinations that have been detailed planning, step by step building the system of roads, means of transport, accommodation establishments: hotels, restaurants, shopping places; means of communication, etc. according to international standards, the more modern, the more convenient, the easier it will be to attract customers. However, in addition to international practices, tourism also has facilities, infrastructure, and equipment bearing the national cultural identity that attracts tourists. For example, in the scenic spots, the landscape must keep the bumpy road meandering through the slopes, riverside, up to caves, pagodas for tourism.
Cannot or absolutely not concreted, bricked, completely petrified the winding, winding roads that are the "soul" of the tourist destination. Losing that soul, the value of tourism will be reduced and tourism quality will also decline. Or in tourist spots, which are ancient villages and towns, when planning and construction must ensure that they do not damage the space, preserve old roads, old houses, old bridges, markets, places of living. This point can only confirm its own unique and distinctive values of residents' community. Even in hotel and restaurant equipment, in addition to the international part, there must be an increase in the rate of unique facilities and infrastructure equipment such as architectural design, interior decoration, decorative patterns decorations, items, etc. made from traditional crafts such as embroidery, silk, pottery, stone, sedge, etc.
The culture is also manifested by the behavioral attitude, wide understanding, accurate scientific habits of the travel agent, especially the product designer and especially the tour guide -the person who goes directly with the tourist/tourist subject during the tour, who is charged with finding beauty and providing it to the tourist.
In addition, tourism development must have a good tourism environment (including natural ecological environment and human social environment). The humane social environment includes the level of social development, intellectual level, and the standard of living, a sense of respect for the law, including the entire system of institutions, laws, mechanisms, and policies. A favorable human social environment, especially a clear legal environment consistent with international practices, will have a positive effect on encouraging tourism development.
Tourism is a cultural activity, but in the end, it is still a business activity so its products must also be cultural: In order to have a system of cultural tourism goods and products, it must be expressed in all details from the tourist route, tourist destination, tourist vehicle, and services, etc. in general, it must be built products meet two requirements: the distinctive and symbolic of the national culture.
Not any tourism product exploited from national culture is also unique, although the culture itself is specific to each country. Exploiting the elements of the national culture's identity and characteristics to form tourism products is to create unique and distinctive cultural products.
Tourism to ethnic minority areas attracts worldwide interest because there visitors can observe and learn about distinctive customs, lifestyles, and cultural values. Many countries around the world have ethnic minorities, yet Vietnam has comparative advantages in developing tourism to such areas. This advantage is reflected in the preservation of primitive features of the ethnic cultures: in their lifestyles, customs, and farming habits, in their architecture and costumes, and in their cultural, artistic, and traditional craft activities. Especially attractive to tourists is the way these cultural features blend with beautiful natural ecological spaces. A further attraction of the cultures of Vietnam's ethnic minorities is their diversity within the unity of the national culture. Thus, investing in tourism to ethnic minority areas creates a unique type of cultural tourism in Vietnam.
Every country has a system of urban centers, but visitors coming to the capital, Hanoi, will surely find it interesting, even surprising, to meet traditional craft "villages" within the city. In particular, these occupations, both rare and ancient in their "technology" and "technological process" and in their special products, are a valuable strength and attraction for tourism. Moreover, most of these special craft villages are, in their overall harmonious form, "cultural and poetic villages" with rich and attractive landscapes and customs, festivals above all. Tourism certainly finds its ideal point here: a unique cultural tourism product full of attractions.
Likewise, ecotourism is attracting interest and effort worldwide. In many countries the resources for creating this product are plentiful, but only in Vietnam is the rural agricultural ecosystem of the monsoon tropics so extremely diverse and distinctive, with its fields, gardens, fish ponds, plants, and animals, together with the associated methods of using and protecting land, water sources, plants, and animals, its farming methods, and the everyday scenes of people engaged in primary production. This is a unique source of material for Vietnamese tourism to create a distinctive tourism product.
The cultural identity of a country or a locality is the foundation for creating iconic products that appeal to tourists. Obviously, the iconic tourism products of Vietnam cannot be created by copying or borrowing from the tourism products of Bangkok, Beijing, or Malaysia, but only from the typical cultural values of Vietnam. Because culture is the foundation of society, expressing the height and depth of national development, creating tourism products that represent the national culture plays an important role in defining the image of the country and of its tourism industry.
Tourism culture is a broad category, expressing the cultural values of the whole of tourism activity. When all the activities of each department and every tourism product are created so as to form the unique characteristics of the national culture, they help form a country-specific tourism culture.
Thus, the whole harmonious relationship between tourists, travel brokers, tourism products, and institutions has created part of tourism culture. Today, tourism culture has become a new element in the cultural category of each country.
Conclusion
A tourism development process that addresses the above problems will be assessed as sustainable. However, such development is only relative, because society is always changing and developing, and the sustainability of one factor may affect the sustainability of other factors.
It can be said that tourism has promoted the protection of the country's cultural treasures. It is the desire of visitors to visit, learn, and experience that motivates the government and people to appreciate, take pride in, care for, preserve, restore, and promote the precious values of the cultural heritage. Heritage tourism creates both income and employment, providing both motivation and resources to preserve and promote heritage values; at the same time, it actively helps improve the quality of life and enhances understanding of and respect for cultural diversity, forming the basis for a suitable code of conduct between local people, tourists, and the heritage. The benefits of heritage tourism are considerable and are shared with businesses and residents. Some of the revenue from heritage tourism is reinvested in heritage conservation, embellishment, restoration, and management. In that sense, heritage tourism makes a great contribution to the conservation and sustainable promotion of cultural heritage.
However, the current strong growth of tourism, especially mass tourism, has had and continues to have negative impacts on cultural heritage. Because heritage is sensitive and vulnerable, poorly controlled mass tourism, especially at the famous heritage sites of our country, is producing many negative effects, such as over-commercialization, overloading with visitors, abuse of the heritage, improper restoration, and "renewal" of heritage, which cause the heritage to degrade quickly, become distorted, and lose its value. The consequences of uncontrolled and unsustainable heritage tourism development threaten the integrity of the heritage. In recent times, some famous heritage sites have seen development investment activities that seriously harmed the heritage, for which a very high price will later have to be paid to restore the heritage values that have been compromised. No society or economy can achieve absolute sustainability; all human activities and measures aim only at ensuring the long-term exploitation of natural resources. | 2020-08-27T09:15:06.275Z | 2020-08-25T00:00:00.000 | {
"year": 2020,
"sha1": "24ce4b2c13e7a9e38081b34182564dce800170a4",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202008.0546/v1/download",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "522b5678a43f1503a747d06879e0f556fc6da7ce",
"s2fieldsofstudy": [
"Economics",
"Sociology",
"Business"
],
"extfieldsofstudy": [
"Political Science"
]
} |
249207935 | pes2o/s2orc | v3-fos-license | Ultrashort time-to-echo T2* and T2* relaxometry for evaluation of lumbar disc degeneration: a comparative study
Background To compare the potential of ultrashort time-to-echo (UTE) T2* mapping and T2* values from T2*-weighted imaging for assessing lumbar intervertebral disc degeneration (IVDD), with Pfirrmann grading as a reference standard. Methods UTE-T2* and T2* values of 366 lumbar discs (L1/2-L5/S1) in 76 subjects were measured in 3 segmented regions: anterior annulus fibrosus, nucleus pulposus (NP), and posterior annulus fibrosus. Lumbar intervertebral discs were divided into 3 categories based on 5-level Pfirrmann grading: normal (Pfirrmann grade I), early disc degeneration (Pfirrmann grades II-III), and advanced disc degeneration (Pfirrmann grades IV-V). Regional differences between UTE-T2* and T2* relaxometry and correlations with degeneration were statistically analyzed. Results UTE-T2* and T2* values correlated negatively with Pfirrmann grades (P < 0.001). In NP, correlations with Pfirrmann grade were high for UTE-T2* values (r = −0.733; P < 0.001) and moderate for T2* values (r = −0.654; P < 0.001). Diagnostic accuracy in detecting early IVDD was better with UTE-T2* mapping than with T2* mapping (P < 0.05), with receiver operating characteristic areas under the curve of 0.715-0.876. Conclusions UTE-T2* relaxometry provides another promising magnetic resonance imaging sequence for quantitatively evaluating lumbar IVDD and was more accurate than T2* mapping in the earlier stages of the degenerative process.
Introduction
Low back pain (LBP) is a leading cause of disability worldwide, placing a great burden on the global health care system [1,2]. Intervertebral disc (IVD) degeneration (IVDD) is a significant contributor to nonspecific LBP, with a lifetime prevalence of over 80% [3,4].
Early stages of IVDD mainly take the form of biochemical changes, including proteoglycan (PG) reduction, dehydration, and collagen degeneration. These lead to a decrease in hydrostatic pressure, resulting in nucleus pulposus (NP) dehydration and loss of the structural and mechanical properties of the IVDs. In advanced stages of IVDD, along with the loss of hydration and the subsequent drop in disc pressure, IVD height decreases under load [5-9]. These degenerative changes are accompanied by structural lesions, such as disc herniation, causing LBP, neurogenic claudication, and even cauda equina syndrome. At this stage, the treatment strategy is limited to conservative treatment alone or surgical excision [10]. Early detection of alterations in IVDD is important for developing preventative strategies or reestablishing degenerated IVDs through, for example, gene therapy, stem cell therapy, and growth factor therapy [11-13].
Conventional magnetic resonance imaging (MRI) is widely used for morphologic, qualitative assessment of IVDD in the clinical workup. Lumbar IVDD is commonly scored using the Pfirrmann grading system, which is based on the assessment of structure and loss of signal intensity on T2-weighted imaging (T2WI). This grading system provides a standardized and reliable assessment of disc morphology on MRI but cannot quantitatively detect early degeneration of IVDs, which is characterized by a loss of PG [14,15].
Several quantitative MRI techniques for objectively evaluating IVD degeneration have been reported, such as diffusion-weighted imaging, diffusion tensor imaging, glycosaminoglycan chemical exchange saturation transfer, sodium imaging, delayed gadolinium-enhanced MRI, T2/T2* mapping, and T1rho mapping. Previous studies have demonstrated that T2* mapping could serve as a quantitative imaging biomarker for evaluating the biochemical state of the discs, correlating with histology, water content, and degeneration [16].
Ultrashort time-to-echo (UTE) imaging is a novel MRI technique with the capacity to capture very short T2* signals (0.008-0.50 ms) [17-20]. It has been confirmed to be sensitive to changes in the deep tissue matrix and to subtle and even preclinical degeneration [21]. To date, UTE-T2* imaging has been reported to be a reliable tool for quantitative assessment of the biochemical changes of short-T2 tissues, including tendon, cartilage, and ligament [20-25]. However, to the best of our knowledge, studies of the UTE-T2* quantitative technique for evaluating IVDD are scarce. We hypothesized that quantitative UTE-T2* mapping is capable of revealing degenerative changes in the discs.
The present study aimed to assess whether lumbar IVDD can be evaluated using UTE-T2* mapping and to compare the potential of UTE-T2*and T2* values in the diagnosis of early IVDD.
Subjects
Ethics approval for this study was provided by the ethics commission of the Fudan University Affiliated Zhongshan Hospital Xiamen Branch. Written informed consent was obtained from all subjects. The inclusion criteria were patients with single or recurrent episodes of nonspecific LBP in the last 6 months and age ≥ 18 years. Exclusion criteria were contraindications for MRI and patients with other spine diseases, such as spinal infection, tumor, tuberculosis, and serious scoliosis.
Statistical analysis
Statistical analysis was conducted using SPSS 22.0 software (IBM, Armonk, NY) and Medcalc 20.022 (Mariakerke, Belgium). The Kruskal-Wallis test was performed to determine differences among the 5-level Pfirrmann grades. The differences between the two methods were expressed using the ± 95% confidence intervals (CIs) from the Bland-Altman analysis. Correlations of quantitative values with Pfirrmann grades were analyzed using Spearman's rank correlation.
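For readers who want to reproduce this kind of agreement analysis, the following is a minimal Python sketch of a Bland-Altman plot. The data are simulated and all variable names are hypothetical; this illustrates the general technique only, not the study's actual pipeline (which used SPSS and MedCalc).

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, ax):
    """Plot Bland-Altman agreement between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = (a + b) / 2.0          # x-axis: mean of the two methods
    diff = a - b                  # y-axis: difference between methods
    bias = diff.mean()
    sd = diff.std(ddof=1)
    ax.scatter(mean, diff, s=10)
    ax.axhline(bias, color="k", label=f"bias = {bias:.2f}")
    for lim in (bias - 1.96 * sd, bias + 1.96 * sd):
        ax.axhline(lim, color="k", linestyle="--")  # 95% limits of agreement
    ax.set_xlabel("Mean of UTE-T2* and T2* (ms)")
    ax.set_ylabel("UTE-T2* minus T2* (ms)")
    ax.legend()

# Simulated NP values (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
ute_t2star = rng.normal(60, 15, 100)
t2star = ute_t2star + rng.normal(0.5, 3.0, 100)  # small bias plus noise
fig, ax = plt.subplots()
bland_altman(ute_t2star, t2star, ax)
plt.show()
```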
Receiver operating characteristic (ROC) analysis was performed and area under the curve (AUC), sensitivity, specificities, positive likelihood ratio (+ LR), and negative likelihood ratio (− LR) were obtained to assess the diagnostic efficacy of each quantitative parameter for differentiating normal IVDs from early disc degeneration and to differentiate early disc degeneration from advanced disc degeneration. AUCs were compared using the DeLong method [30]. A P value less than 0.05 was considered statistically significant.
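Similarly, the Spearman correlation and ROC analysis described above can be sketched as follows. The grades and values are simulated (note the absence of grade V, as in this cohort), the cutoff is chosen via the Youden index, and the DeLong comparison of AUCs is not shown; this is an assumed illustration rather than the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-disc data: quantitative values and Pfirrmann grades (1-4)
rng = np.random.default_rng(1)
grades = rng.integers(1, 5, 300)                      # grades 1..4 only
ute_t2star = 80 - 12 * grades + rng.normal(0, 8, 300) # decreases with grade

# Spearman's rank correlation with Pfirrmann grade
rho, p = spearmanr(ute_t2star, grades)
print(f"Spearman r = {rho:.3f}, P = {p:.3g}")

# ROC analysis: normal (grade I) vs. early degeneration (grades II-III)
mask = grades <= 3
y_true = (grades[mask] >= 2).astype(int)   # 1 = early degeneration
score = -ute_t2star[mask]                  # lower values indicate degeneration
auc = roc_auc_score(y_true, score)
fpr, tpr, thr = roc_curve(y_true, score)
j = np.argmax(tpr - fpr)                   # Youden index picks the cutoff
print(f"AUC = {auc:.3f}, sensitivity = {tpr[j]:.2f}, "
      f"specificity = {1 - fpr[j]:.2f}")
```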
Using the Pfirrmann grading system, 73 discs were categorized as grade I; 110 discs, as grade II; 164 discs, as grade III; and 19 discs, as grade IV. The flowchart for the enrollment of the study population is presented in Fig. 2.
The distribution of the UTE-T2* and T2* values with respect to Pfirrmann grades is provided in Table 2.
Correlation of UTE-T2* and T2* values with Pfirrmann grades
The Kruskal-Wallis test demonstrated that all quantitative values for all segments differed significantly among the Pfirrmann grades (Table 2). Bland-Altman plots are shown in Fig. 3. There was no significant bias between UTE-T2* and T2* values in NP and PAF (P > 0.05). UTE-T2* values showed high correlations with Pfirrmann grades. Comparing the Spearman correlation coefficients, the highest correlation was seen in NP and the lowest in AAF. Among these, the UTE-T2* value of NP showed the highest correlation with Pfirrmann grades. Results of Spearman's correlation analysis are summarized in Fig. 4.
Post hoc multiple comparisons among the Pfirrmann grades
There were significant differences in the UTE-T2* values of NP and PAF between each pair of Pfirrmann grades. T2* values differed significantly between Pfirrmann grades II and III in AAF, NP, and PAF (Fig. 5).
Diagnostic performance of UTE-T2* and T2* values in distinguishing the degeneration groups
ROC curves of the UTE-T2* and T2* values for distinguishing the degeneration groups are plotted in Fig. 6. The corresponding diagnostic test characteristics are provided in Table 3. The AUC values of UTE-T2* mapping in AAF, NP, and PAF were 0.715, 0.876, and 0.787, respectively, for identification of early disc degeneration, and 0.726, 0.893, and 0.804, respectively, for identification of advanced disc degeneration. The AUC values of T2* mapping in AAF, NP, and PAF were 0.620, 0.763, and 0.670, respectively, for identification of early disc degeneration, and 0.570, 0.842, and 0.720, respectively, for identification of advanced disc degeneration. In pairwise comparisons of the ROC curves, UTE-T2* values in NP and PAF were better at identifying early disc degeneration than T2* values. There were no significant differences between UTE-T2* and T2* mapping in AAF. Comparing the different segments, the diagnostic performance of NP was the highest in predicting early disc degeneration, while AAF and PAF performed similarly.
For differentiating early from advanced disc degeneration, the UTE-T2* value of PAF was better than the T2* value, and there were no significant differences between UTE-T2* and T2* mapping in AAF and NP. Comparing the different segments, the diagnostic performance of NP was better than that of AAF, while AAF and PAF performed similarly in predicting advanced disc degeneration.
Overall, the diagnostic efficacy of UTE-T2* mapping was better than that of T2* mapping for evaluating IVDD, especially in NP. The results of this study demonstrated that the UTE-T2* value in NP showed high correlations with Pfirrmann grade (r = −0.733; P < 0.001) and that the AUC for the assessment of early disc degeneration (0.876) was significantly higher than that for T2* (0.763).
[Caption residue from Figs. 3 and 4: Fig. 3 (Bland-Altman plots) — the mean score is plotted on the x-axis and the difference between the two methods on the y-axis (mean difference ± 1.96 SD); AAF, anterior annulus fibrosus; NP, nucleus pulposus; PAF, posterior annulus fibrosus. Fig. 4 — scatter plots of the values in AAF, NP, and PAF according to Pfirrmann grades; panels a, c, and e show UTE-T2* relaxation times, and panels b, d, and f show T2* values correlated with disc degeneration grading.]
Discussion
This study is the first to investigate and compare the diagnostic efficacies of UTE-T2* and T2* mapping in detecting IVDD in humans. The results may help to confirm the feasibility and specificity of UTE-T2* as an objective and quantitative tool for identifying early degenerative changes of the disc, and they show promise for helping clinicians refine diagnostic and therapeutic management strategies.
Conventional MRI, such as the Pfirrmann scale with T2WI, is limited in detecting ultrastructural alterations of early IVDD. Early stages of disc degeneration include biochemical changes, such as a loss or reduction of PG content, which can ultimately lead to dehydration. T2 relaxation reflects the integrated environment of the IVD, including water, protein, collagen, and other solutes [31], and is sensitive to water content and the composition of the collagen network structure. Researchers have reported that T2* relaxation time showed a good correlation with PG and collagen contents in 18 human cadaveric IVDs [32]. Our findings confirmed an increase in the quantitative T2* values between the AAF and NP and a decrease between the NP and the PAF. In line with earlier reports [33,34], T2 and T2* mapping provided roughly similar results. An inverse correlation of the T2 relaxation time in the disc with Pfirrmann grade has been reported by Welsch et al. [35] and Noebauer et al. [36]; the earlier study reported a low-to-moderate correlation between Pfirrmann grades and T2 relaxation times, consistent with the Spearman correlation coefficients between Pfirrmann grades and T2* values in our results. T2 and T2* mapping differ in their biochemical sensitivity to disc tissue: T2 mapping is sensitive to tissue hydration, whereas T2* mapping is more sensitive to changes in tissue integrity [35]. T2* mapping provides more valuable biochemical information on the ultrastructure of IVDs, together with three-dimensional acquisition capability and higher spatial resolution in a short scan time [37].
UTE-T2* mapping was acquired using different echo times in the short (1-10 ms) and ultrashort echo time range [19,22]. Because UTE-T2* mapping can capture the short T2* relaxations of tissues, it is more sensitive to biochemical changes in the collagen matrix than conventional MRI techniques, as judged against histologic standards [19].
Multiple large population-based studies have shown that UTE-T2* mapping can noninvasively detect cartilage subsurface matrix changes, which can be indicative of reduced cartilage health from injury or early degeneration [20,23,25]. Similar to the results reported by Detiger et al. [16], we observed a trend of decreasing UTE-T2* value with increasing degree of degeneration. That study revealed a significant correlation of T2* relaxation time with glycosaminoglycan (GAG) content in the nucleus pulposus, as well as with histologic scoring across varying grades of degeneration [16]. During the aging process, the quantity and quality of PG and collagen diminish, with a corresponding decrease in the short T2* signal [21,25,34,38]. T2* relaxometry appears to be sensitive to water and PG contents. This may be the initial step in the degenerative cycle [25,34,38], which could underlie the decreased UTE-T2* and T2* values.
In NP and PAF, UTE-T2* mapping showed significantly higher diagnostic accuracy than T2* mapping in differentiating early disc degeneration from normal. Theoretically, both UTE-T2* and T2* mapping measure the T2* value of the tissue. The UTE MRI technique mitigates the rapid signal loss from short-T2* components by reducing the TE to the scale of 0-200 microseconds, sampling the free induction decay as early as possible. With a considerably shorter TE (0.032 ms in this study), UTE-T2* mapping allows signals from very short T2 components to be detected [23]. Thus, UTE-T2* mapping is less sensitive to the magic angle effect and more sensitive to water protons and their local environment, making it a satisfactory method for evaluating disc degeneration. Because T2* relaxation time has been reported to reflect reductions in both water content and PG content [16], it is not hard to understand why the UTE-T2* value has better diagnostic accuracy than T2* for differentiating early disc degeneration.
Previous studies have reported that the T2 relaxation time of Pfirrmann grade IV discs is significantly shorter than that of grade III discs, while no significant differences were found between grades IV and V, both of which show extremely low signal intensity [39]. The results of our study showed that, compared with T2* values, UTE-T2* values conveyed significantly higher diagnostic performance in distinguishing early from advanced disc degeneration in PAF. Takashima et al. [40] reported that short T2* relaxation times with UTE are promising for assessing progressive IVD degeneration with poor water content, such as fibrotic change in IVDD with short T2 relaxation times. Our results are quantitatively consistent with those findings. Takashima et al. did not further discuss the quantitative evaluation of early disc degeneration because their study population did not include grade I IVDs; our results would appear to complement and refine their research.
A previous report on healthy ovine IVDs demonstrated that T2 values show regional variation within discs, with high T2 values observed in NP and low T2 values in the AAF and PAF on histologic evaluation [41]. Disc degeneration is believed to originate in NP with depletion of GAG, followed by a reduction in water content [5,42]. Similar to previous reports, our study showed that the correlations of UTE-T2* and T2* values with Pfirrmann grade were highest in NP and lowest in AAF. Our results also showed that NP had the highest diagnostic accuracy in predicting early disc degeneration, while AAF and PAF were similar in this respect. These results suggest that the destruction of hydrophilic GAGs within NP is the main cause of the accumulation of cleaved extracellular matrix fragments with disc aging [16].
There were some limitations in this study. First, our study had no detailed histologic confirmation of the IVDD changes, which is hard to achieve in humans; however, the relationship between T2* values and biochemical changes in IVDD has been previously established in human cadaveric lumbar discs [32]. Second, we were unable to compare the related clinical symptoms with the degree of degeneration in the MRI quantitative parameters; future research is warranted to explore this. Third, no patient in this study showed grade V IVDs, because grade V IVDs tend to have a collapsed disc space or a vacuum phenomenon, which makes it impossible to measure the quantitative values due to susceptibility artifacts. If a complete grade V IVD data set were available, quantitative evaluation of advanced IVD degeneration could be closer to reality. However, as our principal purpose was to detect early disc degeneration, the impact of the incomplete grade V IVD dataset on our results is within acceptable limits.
Conclusions
We demonstrated that UTE-T2* mapping was more accurate than T2* mapping in quantitatively diagnosing early intervertebral disc degeneration. In particular, UTE-T2* mapping allowed disc degeneration to be distinguished precisely, potentially providing a promising imaging biomarker with applications in intervertebral disc degeneration for emerging cell-based therapies. | 2022-06-01T13:46:23.340Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "6a67847f92ce68e2d0daeb875496e6aa3570587f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6a67847f92ce68e2d0daeb875496e6aa3570587f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15364026 | pes2o/s2orc | v3-fos-license | Pleistocene refugia and genetic diversity patterns in West Africa: Insights from the liana Chasmanthera dependens (Menispermaceae)
Processes shaping the African Guineo-Congolian rain forest, especially in its West African part, are not well understood. Recent molecular studies, based mainly on forest tree species, confirmed the previously proposed division of the western African Guineo-Congolian rain forest into Upper Guinea (UG) and Lower Guinea (LG), separated by the Dahomey Gap (DG). Here we studied nine populations of the widespread liana species Chasmanthera dependens (Menispermaceae) in the area of the DG and the borders of LG and UG using amplified fragment length polymorphism (AFLP) and a chloroplast DNA sequence marker, and modelled the distribution based on current as well as paleoclimatic data (Holocene Climate Optimum, ca. 6 kyr BP, and Last Glacial Maximum, ca. 22 kyr BP). The current population genetic structure and the geographical pattern of cpDNA were related to present-day as well as historical modelled distributions. Results from this study show that past historical factors played an important role in shaping the distribution of C. dependens across West Africa. The Cameroon Volcanic Line seems to represent a barrier to gene flow in the present as well as in the past. Distribution modelling proposed refugia in the Dahomey Gap, supported also by higher genetic diversity. This is in contrast with the phylogeographic patterns observed in several rainforest tree species and could be explained by either diverging or more relaxed ecological requirements of this liana species.
Introduction
The African Guineo-Congolian rain forest is the second largest block of rain forest on Earth, with about 6400 endemic plant species [1], and is considered a biodiversity hotspot [2]. Repeated fragmentation of the tropical forest due to climate oscillations has been suggested for the last one million years [3,4]. Based on White's chorological analyses [5], the African Guineo-Congolian rain forest can be divided into three phytogeographic units: Upper Guinea (UG), Lower Guinea (LG) and Congolia. All three units are characterized by considerable historical contractions, shifts and/or expansions [4]. Thus, the current ranges of species or particular lineages are defined by the location of their refugia during the Last Glacial Maximum (LGM) as well as by postglacial migration routes [6]. Phylogenetic and population genetic studies provide valuable data for testing the forest refuge theory as well as for inferring the location of refugia in Africa [7]. These studies have been particularly insightful for tree species due to their longevity and high reproductive output, but low speciation rates [8]. Comparative phylogeographic analyses of trees from LG and Congolia revealed a partial congruence of phylogeographic patterns with the LGM forest refugia proposed by Maley [9-15]. Interestingly, phylogeographic patterns congruent with those of tree species were also found for Marantaceae herbs and lianas in this region [16].
The split between the UG and LG rain forests consists mainly of a savanna corridor in Benin, Togo and eastern Ghana, also referred to as the Dahomey Gap (DG), and is caused by current rainfall gradients [14,17-19]. Nevertheless, the two forest blocks were probably last connected during the Holocene Humid Period (ca. 6-9 thousand years before present, kyr BP) [20], and several rain forest plant species are still present in the DG, although scattered in microhabitats. It is therefore worth noting that the forest species in the DG may either originate from recent migrations from the main forest blocks (UG, LG) or constitute a remnant of the last period of rain forest connection. Interestingly, a recent phylogeographic study of the tree species Distemonanthus benthamianus (Fabaceae) [19] indicated that the history of the DG populations is consistent with paleo-vegetation data suggesting that the forest flora of the DG might be a relic of the early Holocene period, when the Guineo-Congolian forest reached its maximum geographical distribution.
Lianas (woody vines) are non-self-supporting plants that use the architecture of trees to ascend to the forest canopy [21]. They play an important role in forest dynamics, contributing to several key processes (e.g., gap-phase dynamics, transpiration and carbon sequestration). Lianas are particularly abundant and diverse in lowland tropical forests, where they constitute up to 40% of the woody biomass and more than 25% of the woody species [21], and contribute substantially to the forest leaf area [22,23]. Interestingly, Martin et al. [24] report that they are more prevalent in areas of secondary forest succession and are often able to compete effectively against tree and shrub species under acute and chronic disturbance. For lianas, it is expected that their genotypic diversity, in comparison with trees, mirrors younger historical events due to presumably shorter life cycles [25,26], and that current genetic patterns might be more structured due to smaller dispersal distances in the tropical understory [27].
Chasmanthera dependens is a dioecious forest liana of the family Menispermaceae. It is widely distributed from Sierra Leone eastwards to Eritrea and Somalia, and southwards through eastern DR Congo and Tanzania to Angola and Zambia [28]. Chasmanthera dependens occurs in dense evergreen and semi-deciduous humid forest, in gallery forest, in termite mound thickets, thalwegs, humid secondary forest and bush fallow, at low to medium altitudes (up to 1500 m). It has a preference for well drained soils in localities with good availability of water and light [29,30]. The species is widely used in traditional medicine due to its contents of bitterns and alkaloids [30][31][32].
In this study, we sampled populations of C. dependens from the area of the DG and the borders of LG and UG, genotyped them with amplified fragment length polymorphism (AFLP), employed a chloroplast (cp) DNA sequence marker, and modelled the distribution based on current as well as paleoclimatic data (Holocene Climate Optimum, HCO, ca. 6 kyr BP, and Last Glacial Maximum, LGM, ca. 22 kyr BP) in order to answer the following questions: 1. Was the distribution of C. dependens across West Africa influenced by past climatic changes (Pleistocene)? Which areas are indicated as LGM refugia using distributional models based on past climatic scenarios?
2. Which areas could be considered as LGM refugia based on patterns of genetic diversity? Are the patterns recovered by nuclear (AFLP) and chloroplast markers congruent and correspond to the postulated refugia indicated by distribution models?
3. Did the Dahomey Gap impact the present distribution of genetic diversity in this species? Is it possible to identify two diverging gene pools corresponding to refugia in UG and LG or is the genetic diversity distributed continuously?
4. Are the phylogeographic patterns of a liana congruent with generally postulated patterns for tree species?
Plant material
Fresh leaf tissue of C. dependens was collected from five West African countries (Benin, Cameroon, Ghana, Nigeria and Togo), covering the area of the eastern UG, the western LG and the DG. In total, 139 individuals representing nine populations were investigated, with 7-39 individuals per population (Table 1, S1 Table). Samples collected within a 50 km radius were considered a population. At least one herbarium specimen was prepared from each locality. Herbarium specimens were deposited at the Herbarium Senckenbergianum (FR) as well as at the University of Lagos Herbarium (LUH). The coordinates for the field-collected material were obtained using a handheld GPS unit, and ArcView-ArcGIS v10.1 (ESRI, USA) was used for all geographical presentations.
DNA extraction, PCR amplification and sequencing
Total genomic DNA was extracted from silica gel-dried leaf tissue. Extraction of total genomic DNA followed the CTAB procedure of Doyle and Doyle [33], with the following modifications: 700 μl of CTAB buffer were used for the initial incubation and 500 μl of isopropanol for DNA precipitation, with two subsequent washing steps using 100 μl of 70% ethanol each. Finally, DNA was dissolved in 200 μl 1 × TE including 2 μl RNase (10 mg·ml⁻¹). Alternatively, DNA was extracted with the Qiagen DNeasy® Plant Mini Kit (Hilden, Germany) or the NucleoSpin Plant II Kit (Macherey-Nagel, Düren, Germany) from leaf fragments.
The cpDNA trnH-psbA intergenic spacer was amplified using the primers trnH(gug) 5'-CGC GCA TGG TGG ATT CAC AAT CC-3' and psbA 5'-GTT ATG CAT GAA CGT AAT GCT C-3' [34]. The reaction mix of 25 μl contained 21.9 μl 1.1 × ReddyMix™ PCR Master Mix (ThermoFisher Scientific, Waltham, USA), 0.5 μl bovine serum albumin (10 mg·ml⁻¹) (New England BioLabs, Ipswich, USA), 1 μl dimethyl sulfoxide (DMSO; Carl Roth, Karlsruhe, Germany), 1 μl of template DNA, and 0.3 μl of each primer (10 μM). PCR reactions were performed on a Mastercycler® pro (Eppendorf, Hamburg, Germany), with initial denaturation of 2 min at 95 °C, followed by 35 cycles of denaturation at 95 °C for 1 min, annealing at 53 °C for 1 min and extension at 72 °C for 1 min, followed by a final extension step for 10 min at 72 °C. PCR products were cleaned using the NucleoSpin® Extract II Kit (Macherey-Nagel, Düren, Germany) or the QIAquick® Gel Extraction Kit (Qiagen, Hilden, Germany). Sequencing was performed for both strands on a 3730 DNA Analyzer (Applied Biosystems, Foster City, USA) by the laboratory centre of the Senckenberg Biodiversity and Climate Research Centre (BiK-F), using the primers used for PCR. Sequences were manually edited to remove low-quality bases and assembled into contigs using Geneious Pro v5.6.6 (Biomatters, Auckland, New Zealand). Sequences were aligned using the pairwise alignment algorithm implemented in Geneious Pro, and the alignment was manually refined.
Amplified fragment length polymorphism (AFLP) analysis
For a subset of 54 individuals plus 22 duplicate samples, AFLP analysis was performed using the protocol established by Vos et al. [35], with minor modifications: Approximately 300 ng of DNA was digested and ligated in a 15 μl reaction mix containing 1 × T4-ligase buffer and 1 × ATP solution (Bioline, London, UK), 50 mM NaCl, 0.75 μg BSA, 1.5 U T4-ligase (Bioline), 1 U MseI and 5 U EcoRI (New England Biolabs), and 0.37μM of EcoRI-adapter and 3.67 μM of MseI-adapter. The reaction mix was incubated at 37˚C for 3 h, followed by an inactivation step at 65˚C for 10 min. The restriction-ligation product was subsequently diluted ten-fold. For the pre-selective PCR reaction, 2.5 μl of the diluted restriction-ligation product were used in a total reaction volume of 12.5 μl which contained 10 × PCR buffer II (Applied Biosystems), 2 mM MgCl 2 , 0.8 mM dNTP mix, 0.2 μM EcoRI-A primer (5'-GACTGCGTACCAATTCA-A-3'), 0.2 μM MseI-C primer (5'-GATGAGTCCTGAGTAAC-C-3') and 0.25 U AmpliTaq polymerase (Applied Biosystems). The reactions were held at 72˚C for 2 min, followed by 20 cycles of 94˚C for 20 s, 56˚C for 30 s, and 72˚C for 2 min, with a final 30 s extension at 60˚C, and were subsequently diluted ten-fold. For selective PCR, 2.5 μl of the diluted pre-selective PCR product were used as a template in a total reaction volume of 12.5 μl. The PCR master mix contained 1 × GoldTaq buffer (Applied Biosystems), 2.5 mM MgCl 2 , 0.8mM dNTP mix, 0.2μM Mse primer, 0.08μM EcoRI fluorescence-labeled primer (EcoRI-ACG (NED)/MseI-CTC, EcoRI-AAG(6-FAM)/MseI-CTA, EcoRI-AGC(VIC)/MseI-CTG,EcoRI-AGG(NED)/MseI-CAT; EcoRI-AAC(6-FAM)/MseI-CAG and EcoRI-ACC(VIC)/MseI-CTC), and 0.5 U Ampli-Taq Gold (Applied Biosystems). The reactions were held at 95˚C for 5 min, followed by 13 cycles at 94˚C for 30 s, a touch down cycle of 65˚C to 56˚C (-0.7˚C per cycle) for 1 min and 72˚C for 1 min, followed by another 23 cycles at 94˚C for 30 s, 56˚C for 1 min and 72˚C for 1.5 min, with a final 8 min extension at 72˚C. Differentially fluorescence-labeled PCR products and GS600 LIZ size standards (Applied Biosystems) were multiplexed, and fragments were separated on a 3730 DNA Analyzer (Applied Biosystems). In each run, a total of 96 samples were analyzed, including one negative control and several other repeats (altogether 37%), as recommended by Bonin et al. [36]. Raw data were visualized and the fragments manually scored using GeneMarker v1.97 (Soft Genetics, State College, USA). Processed data were exported as a presence/absence matrix.
Data analyses
Indels in the cpDNA sequences were manually coded for presence and absence using the approach described by Simmons and Ochoterena [37], and treated as single polymorphic sites. A statistical parsimony network among cpDNA haplotypes was reconstructed using TCS v1.2 [38] with a default connection limit of 95%. Haplotypes were then plotted as pie charts on the map of West Africa using the compiled site co-ordinates to show the distribution of haplotypes. Haplotype diversity (h) [39] and nucleotide diversity (π) [40] of populations were calculated using MEGA v5 [41] and DnaSP v5.10.1 [42].
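For illustration, the haplotype diversity (h) and nucleotide diversity (π) statistics mentioned above can be computed directly from aligned sequences. The following is a minimal sketch with toy data; it is not the MEGA/DnaSP implementation, and the estimator for h shown is Nei's unbiased form.

```python
from itertools import combinations
from collections import Counter

def haplotype_diversity(haps):
    """Nei's haplotype (gene) diversity: h = n/(n-1) * (1 - sum p_i^2)."""
    n = len(haps)
    freqs = [c / n for c in Counter(haps).values()]
    return n / (n - 1) * (1 - sum(f * f for f in freqs))

def nucleotide_diversity(seqs):
    """Average pairwise proportion of differing sites (pi) over all pairs."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy aligned sequences (hypothetical, for illustration only);
# identical sequences are treated as the same haplotype
seqs = ["ACGTACGT", "ACGTACGA", "ACGTACGA", "ACCTACGT"]
print(haplotype_diversity(seqs))
print(nucleotide_diversity(seqs))
```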
For the AFLP dataset, several statistical parameters (total number of fragments, proportion of polymorphic fragments, number of private fragments, and Nei's gene diversity for the whole sampling as well as for particular populations [40]) were computed using the R script AFLPdat [43]. Main trends in genetic variation among individual genotypes were visualized by principal coordinate analysis (PCoA) based on Jaccard distances, calculated using PAST v2.7 [44].
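A classical PCoA on Jaccard distances, as computed here in PAST, can be sketched in Python as follows; the AFLP presence/absence matrix is simulated, and the double-centering eigendecomposition shown is one standard route to principal coordinates.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical 0/1 AFLP matrix: rows = individuals, columns = fragments
rng = np.random.default_rng(2)
X = rng.random((54, 374)) > 0.5            # boolean presence/absence

D = squareform(pdist(X, metric="jaccard")) # pairwise Jaccard distances

# Classical PCoA: double-center the squared distance matrix, eigendecompose
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0))  # first two axes
explained = vals[:2] / vals[vals > 0].sum()
print(f"Variation explained by axes 1-2: {explained.sum():.1%}")
```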
For both the cpDNA and AFLP datasets, F-statistics, AMOVA and Mantel tests (based on pairwise population F ST matrices) were calculated in Arlequin v3.1 [45], and significance was tested using a nonparametric permutation test following the method of Excoffier et al. [46].
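The Mantel test of association between genetic and geographic distance matrices can likewise be sketched with a simple permutation scheme; the matrices below are toy data, and Arlequin's implementation may differ in details such as the permutation count and tail handling.

```python
import numpy as np

def mantel(D1, D2, n_perm=9999, seed=0):
    """Mantel test: correlation between two distance matrices, with
    significance from random row/column permutations of one matrix."""
    idx = np.triu_indices_from(D1, k=1)
    x, y = D1[idx], D2[idx]
    r_obs = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[np.ix_(p, p)][idx], y)[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy symmetric matrices (hypothetical): genetic vs. geographic distances
rng = np.random.default_rng(3)
G = rng.random((9, 9)); G = (G + G.T) / 2; np.fill_diagonal(G, 0)
geo = G + rng.normal(0, 0.1, G.shape); geo = (geo + geo.T) / 2
np.fill_diagonal(geo, 0)
r, p = mantel(G, geo)
print(f"r_M = {r:.3f}, p = {p:.3f}")
```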
Distribution modelling
In order to investigate the relation of current genetic patterns to the past processes that might have shaped them, the potential distribution of Chasmanthera dependens was modelled using current and past climatic data. Occurrence records were compiled from several databases, including GBIF [47], the African Plant Database [48], and a record from Gnoumou et al. [49]. Duplicate and doubtful records were removed (S2 Table), leaving a total of 131 georeferenced distribution points. Bioclimatic grids at a spatial resolution of 10' were downloaded for the present as well as the LGM (ca. 22 kyr BP) and the HCO (ca. 6 kyr BP) from the WorldClim v1.4 database [50] and clipped to an extent covering tropical Africa. For projections into the past, we used WorldClim's paleoclimate layers for the CCSM4 and MPI-ESM-P global climate models. The LGM and HCO were the periods in which the climate changed most abruptly in the recent past, and the patterns recovered by the models could help trace their footprints in the genetic variation. Highly correlated variables (absolute correlation coefficients higher than 0.8, S3 Table) and variables with implausible discontinuities were removed, leaving a set of six variables that were used in Maxent v3.3.3 [51] for distribution models of C. dependens during the present and the LGM (Bio1 = Annual Mean Temperature, Bio6 = Min Temperature of Coldest Month, Bio7 = Temperature Annual Range, Bio12 = Annual Precipitation, Bio14 = Precipitation of Driest Month, Bio15 = Precipitation Seasonality). We removed duplicate records, reserved 25% of the occurrence points for testing, chose 10,000 random background points (i.e. pseudo-absences), disabled hinge and threshold features in Maxent, and used the median of 100 model runs. For evaluation of the distribution models, we used the AUC (area under the model's receiver-operator-characteristic curve) [52].
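The pre-filtering of highly correlated bioclimatic variables (|r| > 0.8) described above can be illustrated with a short sketch. The greedy drop rule and the variable values below are assumptions for demonstration, since the paper does not state which member of each correlated pair was retained.

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.8):
    """Greedily drop one variable from every pair whose absolute
    Pearson correlation exceeds the threshold."""
    corr = df.corr().abs()
    keep = list(df.columns)
    for i, a in enumerate(df.columns):
        for b in df.columns[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > threshold:
                keep.remove(b)  # arbitrary choice; domain knowledge may override
    return df[keep]

# Hypothetical bioclimatic values sampled at 131 occurrence points
rng = np.random.default_rng(4)
bio1 = rng.normal(25, 2, 131)
df = pd.DataFrame({
    "bio1": bio1,
    "bio5": bio1 + rng.normal(8, 0.5, 131),  # strongly correlated with bio1
    "bio12": rng.normal(1500, 300, 131),
})
print(drop_correlated(df).columns.tolist())  # bio5 is dropped
```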
Chloroplast DNA data and haplotype distribution
The cpDNA sequences were obtained for 139 individuals (Electronic Appendix 1). The length of the analyzed trnH-psbA fragments ranged from 244 to 256 bp. Nine nucleotide substitutions, one indel and two repeated sequence motifs were detected. The length of the alignment was 256 bp. After manual coding of the indels and removal of the repeated sequence motifs, the total length of the alignment was reduced to 244 bp, and 10 parsimony-informative sites were considered. Newly generated sequences were deposited in GenBank (KX863354-KX863492, www.ncbi.nlm.nih.gov/genbank/). Seven haplotypes were identified, and the unrooted statistical parsimony haplotype network revealed three informal groups of haplotypes (Fig 1), separated from each other by four to five mutations. The first group consisted of four haplotypes (H1-H4), the second group of one haplotype (H5), and the third group of two haplotypes (H6-H7). Haplotypes H6 and H7 were exclusive to Cameroon populations, H4 was found only in the Nigerian population NG02, and H1 only in Benin. In contrast, haplotype H2 was distributed in Nigeria, Benin, Togo and Ghana, and H3 was found throughout the whole studied area. Haplotype and nucleotide diversities of the populations and broader geographical units are summarized in Table 2. F-statistics and AMOVA results are summarized in Table 3. The highest values of haplotype and nucleotide diversity were recorded in populations from Cameroon (CMR02) and Togo (TG01).
AFLP data analyses
After removing fragments with an error rate of more than 15%, 374 clearly scorable fragments sized from 100 to 591 bp were considered for further analyses, of which 89.01% were polymorphic (Table 2, S4 Table). The repeatability (technical difference rate) [36] of replicated individuals was 89.83-97.16% (mean 93.65%). Two-dimensional PCoA based on Jaccard distances separated the populations from Nigeria, Benin, Togo and Ghana from the Cameroon populations (Fig 2A). The separation also strongly reflected the division suggested by the haplotype network (Fig 2B). However, only 16.6% of the overall variation was explained by the first two axes.
The cpDNA and AFLP datasets revealed strikingly contrasting results, suggesting high population differentiation for the cpDNA data (F ST = 0.797) but very low population differentiation for the AFLP data (F ST = 0.064) (Table 3). Mantel tests of both datasets indicated a weak (cpDNA, r M = 0.373, p = 0.015) to strong (AFLP, r M = 0.623, p = 0.037) correlation between the matrices of genetic and geographic distances of the populations.
Species distribution modelling
The present-day models reflect well the distribution known from occurrence records and the literature (Fig 3A and 3B), apart from the localities in Tanzania, Zambia and Malawi. All single model runs had test AUC values above 0.7, with an average of 0.83. The bioclimatic variable with the highest contribution to the models was the minimum temperature of the coldest month (Bio6, 77.8%), followed by the temperature annual range (Bio7, 16.2%) and the annual mean temperature (Bio1, 3.3%). The annual precipitation (Bio12, 2.7%) had the smallest contribution.
Beyond the known distributions, high probabilities of occurrence were also predicted for coastal Kenya and Tanzania. Distribution ranges for the present, the HCO and the LGM, under both climate models, consistently showed a gap in the area of the Cameroon Volcanic Line (CVL), including Mt. Cameroon and the Bamenda Highlands (as well as westwards towards the Niger Delta). Furthermore, during the LGM the distribution range seems to have been much more fragmented in West Central Africa and the East African Rift zone than either nowadays or during the HCO. Interestingly, high distribution probabilities during the LGM were assigned to the coastal areas of Ghana, Togo and Benin, also referred to as the Dahomey Gap.
Discussion
Geographic patterns of genetic diversity and differentiation of the African liana Chasmanthera dependens were investigated in this study in order to assess phylogeographic processes in West Africa using a descriptive genetic and distribution modelling approach. Particular focus of the modelling approach was given to populations representing the UG and LG phytogeographical units, and processes possibly accounting for observed patterns are discussed. For the distribution models climate grids at 1 km resolution were used, which are considered well-suited to account for the subcontinental extent of the study area and the objective of modelling past distributions. Details on the extent of microhabitat patches with possibly diverging microclimate were therefore not considered, which may lead to overestimations in the drier parts of the species range.
Nuclear and cpDNA genetic differentiation
AFLP data for Chasmanthera dependens populations showed very low levels of genetic differentiation among the populations (F ST = 0.064). Low genetic differentiation and high gene flow between populations can result from long-distance gene dispersal either by pollen or by seed [53]. However, the significantly higher chloroplast genetic differentiation (F ST = 0.797) suggests much higher pollen-mediated gene flow than gene flow by seed dispersal. In tropical woody plants, pollen-mediated gene flow is thought to be more extensive than gene flow by seed [54,55]. Animal-dispersed pollen can move over several kilometers in a continuous tropical forest [56], and wind-dispersed pollen probably over much longer distances. Chasmanthera dependens is a dioecious species with small greenish-yellow male flowers and small brownish female flowers in pseudo-racemose inflorescences, and relatively large fleshy seeds. Hence, the higher pollen-mediated gene flow in C. dependens could be explained by occasional wind pollination over long distances. On the other hand, fleshy seeds might also be considered an efficient strategy for moving seeds over certain distances [57], most probably by birds [58]. Nevertheless, the pollination and seed dispersal agents of C. dependens, as of other climbers, are still insufficiently documented [59]. In dioecious taxa, gender distribution and sex ratio also strongly influence gene flow [60]. An outcrossing mating system results in reduced population differentiation, as reflected by the (largely nuclear) AFLPs, but bi-parental inbreeding also remains a possibility [61]. Moreover, a small number of individuals of one sex can significantly reduce the effective population size [62]. Hence, stochastic neutral processes and genetic drift can certainly contribute to the population differentiation reflected by the cpDNA data, considering also the low population densities and patchy distribution pattern (A.C. Iloh, personal observation).
Genetic divergence related to past climate fluctuations
Current geographical patterns of genetic diversity provide useful insights into species' histories [63,64], in particular if the current observations are combined with distribution modelling based on past climatic conditions. In our study, one haplotype was recovered throughout the whole studied area (H3, Fig 1). The presence of one haplotype suggests either a past continuous distribution throughout the area or the result of dispersal events. As chloroplast haplotypes represent the seed parent [65], and our genetic data suggest that seed dispersal is limited, we consider a past continuous distribution more likely. Distribution modelling suggested several gaps in the distribution within the study area, including the CVL, for at least 22 kyr. We therefore assume that haplotype H3 might represent a widespread ancestral haplotype spread throughout the distribution range in the moistest phase of the Eemian Interglacial period (125-120 kyr BP), the last period of continuous rainforest before the LGM, or even earlier [66]. Apart from haplotype H3, we identified two gene pools using both types of molecular markers (Figs 1 and 2), with a significant geographical pattern (Mantel tests). This pattern, however, does not correspond to the division of the proposed phytogeographic units (UG, LG), even though UG is under-represented in our sampling. Chloroplast markers revealed a distinct position of the Cameroon populations, which carry a set of unique haplotypes (H6, H7) and simultaneously have some of the highest nucleotide and haplotype diversities (Table 2). The differentiation of the Cameroonian populations in the cpDNA was also reflected in the AFLP analysis (Fig 2). The remainder of the West African populations could be considered a second gene pool constituted mainly by haplotypes H1-H5. The Cameroon Volcanic Line (CVL) seems to represent a barrier between these gene pools, both today and in the past (Fig 3). Hence, we did not recover a gene pool differentiation corresponding to UG and LG, as observed in the legume tree species Distemonanthus benthamianus [19], but rather between Cameroon and the remainder of the West African populations. A specific gene pool in the area of the DG, as compared with populations from Cameroon, was also recovered in the rainforest tree Symphonia globulifera (Clusiaceae) [67] as well as in the dioecious tree Milicia excelsa (Moraceae) [12]. However, due to a lack of sampling, no relation to populations from Nigeria was elucidated. In contrast, one continuous gene pool of the gallery forest legume tree Erythrophleum suaveolens (Fabaceae) was recovered throughout the UG and DG, reaching up to the CVL [68].
On the one hand, this finding supports the presence of refugia in Cameroon, which has also been previously suggested based on the high genetic diversity documented in several tree species [10] and is also mirrored by higher probabilities in the paleodistribution models (Fig 3C-3F). On the other hand, we observed a certain west-east gradient in haplotype and nucleotide diversity in the second gene pool for populations from Ghana, Togo, Benin and Nigeria, revealing the populations from Togo and Benin (TG01, BN01) as the genetically most diverse. Interestingly, Togo and Benin represent the area of dry vegetation (i.e., the DG) separating UG and LG, and the higher haplotype diversity and uniform gene flow across the DG (haplotypes H2, H3; Fig 2) are rather surprising. To explain this pattern, several scenarios can be considered: 1) high haplotype diversity and haplotype endemism indicate a refugium at or close to the locality; 2) the locality might have been colonized from different refugia; or 3) the high diversity is a result of recent dispersal events. Dispersal events can be considered less likely owing to the low seed dispersal suggested by the comparison of cpDNA and AFLP markers (see the discussion above). To differentiate between the first two scenarios, distribution modelling and the presence of the derived endemic haplotype H1 can provide valuable insights, even though our data provide only limited resolution and the drier parts of the species range might be overestimated. It is remarkable that the predicted distribution areas with the highest probabilities under the LGM paleoclimatic scenarios are localized in the area of the DG (Fig 3E and 3F), from which C. dependens expanded during the HCO (Fig 3C and 3D). This implies that the currently observed high diversity in the area is very likely an outcome of the LGM climatic fluctuations, and the high haplotype and nucleotide diversity of the population in Togo (TG01, Table 2) and the presence of haplotype H1 in population BN01 (Fig 1) might reflect the presence of an LGM refugium, as suggested by the paleodistribution models (Fig 3). Alternatively, refugia might have been located further east in the UG phytogeographic unit, with BN01 and TG01 representing a melting pot of widely distributed haplotypes, which unfortunately cannot be tested with our sampling.
Chasmanthera dependens nowadays seems to be associated with dense evergreen and semi-deciduous humid forest. However, the species also occurs in gallery forest, in termite mound thickets, thalwegs, and bush fallow. Lianas in general are considered to be more prevalent in areas of secondary forest succession and are often able to compete effectively against tree and shrub species under disturbed environmental conditions [24]. Based on the genetic data and distribution models, C. dependens does not seem to be strictly associated with tropical rainforest, which might explain why the genetic patterns and distribution models do not reflect the UG/LG phytogeographic division. Gallery forests, disturbed forest habitats, and forest edges are currently present throughout savannas, and some of these habitats were most probably also present in the area of the DG during the LGM. Interestingly, evergreen and semi-deciduous rain forest is proposed for most of present-day Nigeria during the LGM based on paleovegetation data [69], while the LGM paleoclimatic models predicted the absence of C. dependens in southwestern Nigeria, which is in line with the low haplotype diversity there, suggesting later colonization. However, given that endemic haplotypes indicate the presence of LGM refugia, it is noteworthy that population NG02 consists of approximately 95% of the derived endemic haplotype H4. Interestingly, the paleodistribution models revealed occurrence probabilities in south-eastern Nigeria similar to those recovered for the distribution west of the CVL during the HCO under both models (Fig 3C and 3D) and during the LGM under the MPI-ESM-P model (Fig 3F). This finding suggests the presence of an LGM refugium of particular C. dependens lineages also in evergreen and semi-deciduous rain forest, in line with the recognition of several gene pools of the evergreen forest tree species Erythrophleum ivorense (Fabaceae) [13] in this area.
Conclusions
Results from this study show that past historical factors played an important role in shaping the distribution of Chasmanthera dependens across West Africa. The Cameroon Volcanic Line seems to represent a barrier to gene flow in the present as well as in the past, and uniform gene flow across Nigeria and the Dahomey Gap was observed. Distribution modelling proposed refugia in the Dahomey Gap, supported also by higher genetic diversity and the presence of the derived endemic haplotype H1. This is in contrast to the phylogeographic patterns observed in several tree species and could be explained by either diverging or more relaxed ecological requirements of this liana species.
Supporting information S1 | 2018-04-03T01:45:35.423Z | 2017-03-16T00:00:00.000 | {
"year": 2017,
"sha1": "b20f76b6aba8c111b4255aba63b71752438fd384",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0170511&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b20f76b6aba8c111b4255aba63b71752438fd384",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
208421342 | pes2o/s2orc | v3-fos-license | Different stages of the evolution of cerebral aneurysms: joint analysis of mechanical test data and histological analysis of aneurysm tissue
In practical neurosurgery, an important issue is determining the status of an aneurysm and predicting its further growth, rupture, or stabilization. The main approaches in risk assessment studies are computational hydrodynamics and analysis of the mechanics of the cerebral aneurysm wall. In this paper, an analysis of various sections of the cerebral aneurysm wall is given, combining mechanical test data and histological examination data. It is shown that, along with significant differences in mechanics, different degrees of calcification are observed in the tissue, which indicates different levels of impaired transport of substances inside the tissue.
Introduction
Cerebral aneurysms (CA) occur in 20-50 people per 1000 of the population and depend little on race and age [1,2], while the correlation with gender remains controversial. One of the main challenges in modern neurosurgery is determining the stage of an aneurysm for the subsequent estimation of the risk of its rupture. The data of various mechanical experiments show significantly different results depending on the method of transporting the samples, the sensitivity of the tensile machine used, and the experimental technique [3-7]. It was shown [8] that the differences in the mechanical properties of aneurysms between individuals and between samples tested with different experimental techniques can be comparable. In this regard, the use of coupled tests for the classification of aneurysm tissue becomes important. In particular, an analysis of the relationship between the direction of collagen fibers in a sample and its strength characteristics [9], using a confocal microscope, is already in use. A light-scattering technique is used to determine the distribution of the main direction of the fibers and the change of this distribution under mechanical load [10]. In [1], the histology of ruptured and unruptured aneurysms was compared, which showed statistically different percentages of collagen, elastin, smooth muscle, and endothelial cells. Meanwhile, many studies classify ruptured and unruptured aneurysms according to various criteria: maximum stress, maximum strain, distribution of the direction of collagen fibers, etc. All these results are obtained at different stages of development of the aneurysm tissue, which, as has been shown, can continue to develop for several years [11]. Despite numerous studies in this area, the main trigger destabilizing the body of the aneurysm has still not been elucidated. These days, the LIF method [12], which estimates the ratio of various proteins (nitrogenous bases) that are natural fluorophores, is also used for the study of cerebral aneurysms. In this paper, we conducted a comparative analysis of the results of a mechanical experiment with the data of histological analysis of different areas of an aneurysm of the same patient. These areas (dome and neck) represent different stages of development of the same aneurysm, which constitutes valuable experimental data. The ability to examine several tissue samples of the same aneurysm both mechanically and histologically was obtained due to the large area of the body of the aneurysm and the removal method developed over time.
Methods
All specimens used in the experiment were obtained during aneurysm clipping surgery in the Federal Neurological Center of Novosibirsk. After extraction, the specimen is placed into saline at +2 to +5 °C and is delivered to the experimental laboratory under the same conditions. The overall time from the moment of specimen extraction to the mechanical experiment does not exceed 24 hours. The mechanical experiment is performed with the universal tensile machine Instron 5944, with the specimen kept in a biobath filled with solution heated to 37 °C throughout the experiment. The specimen is fixed in specially made clamps covered with sandpaper. The mechanical experiment is performed in several stages with increasing engineering stress. After the test, some of the specimens were sent for histological analysis. On a rotary microtome, transverse histological sections 5 µm thick were made. The preparations were stained with hematoxylin and eosin by the standard method (for survey microscopy). Overall, five specimens from three patients were studied in this work.
Mechanical test
As a result of the cyclic uniaxial mechanical tests on the samples, stress-strain diagrams were obtained (Fig. 2). The values of the maximum stress and strain for each sample are listed in Table 2.
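The paper reports maximum stress and strain per sample (Table 2) without spelling out the conversion from raw machine output. As a minimal sketch, assuming the standard engineering definitions and entirely hypothetical specimen dimensions and readings (Python):

import numpy as np

def engineering_stress_strain(force_N, displacement_mm, gauge_length_mm, cross_section_mm2):
    """Convert raw uniaxial tensile-machine output to engineering stress/strain."""
    strain = np.asarray(displacement_mm) / gauge_length_mm   # dimensionless
    stress = np.asarray(force_N) / cross_section_mm2         # N/mm^2 == MPa
    return strain, stress

# Hypothetical readings from one loading cycle of a soft-tissue specimen
force = [0.00, 0.05, 0.12, 0.22, 0.35]        # N
displacement = [0.0, 0.4, 0.8, 1.2, 1.6]      # mm
strain, stress = engineering_stress_strain(force, displacement,
                                            gauge_length_mm=8.0,
                                            cross_section_mm2=0.5)
print(f"max strain = {strain.max():.2f}, max stress = {stress.max():.2f} MPa")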
Below are the results of histological analysis of samples.
Fragment T. (dome).
The tissue is represented by a rather large fragment of an aneurysm-like altered artery wall. There are several pronounced subintimal calcification foci with disintegration in the center of these zones. Around the large foci, the presence of diffuse microcrystalline calcification is noted. The layered structure of the vessel wall is effaced; the smooth muscle fibers of the middle layer are dystrophically altered and in several places have completely lost their typical structure. There is total interfiber swelling of the vessel wall (Fig. 3a).
At high magnification, total microfragmentation of the altered smooth muscle and connective tissue fibers is observed, as a result of which inclusions similar to atheromatous deposits appear in the residual interfiber spaces (Fig. 4b). This is also accompanied by the appearance of thin-walled blood vessels of small diameter (neovasculogenesis). Figure 4: Cross-section of the arterial wall aneurysm (neck): a) zone of intact cellularity (dotted contour) and microvessels (blue arrows); b) loss of fiber structure, decellularization of the aneurysm wall. Stained with hematoxylin-eosin.
Fragment T. (neck).
On the intimal side, there is a fragment of an old parietal thrombus in the process of reorganization. In the neck of the aneurysm, there is a lack of cell nuclei in the stroma (hypocellular wall) and the fibers are destructured, with a separate zone of normal cellularity (Fig. 4a). At higher magnification, the destructuring of the vascular wall (loss of layering in the structure of the vessel wall) and the initial stages of calcification are observed (Fig. 4b).
Discussion
This study is preliminary and the authors understand the limitations of the obtained results, since we have so far investigated only one patient using the described technique. In the course of the study we established a similarity with the results of [1] regarding reduced cellularity in samples of unruptured aneurysms. As for calcification, it was discovered in both specimens, which points to an insignificant change of this parameter in the direction from the neck of the aneurysm toward the geometric center of its dome. However, the results of the mechanical experiment show that even minor changes in the histology of the samples can cause significant differences in the maximum strain. According to [13], this pattern of loss of cellularity in the wall of a cerebral vessel aneurysm corresponds most closely to type C (loss of wall cellularity, lack of endothelium). We believe that the loss of smooth muscle cells and fibroblastic cells leads to ageing and gradual destruction of the fibers of the connective tissue framework of the artery wall, which inevitably entails a change in the mechanical strength of this vascular zone. However, at this stage of research we see only the consequences of the pathological process (the formation of the aneurysm), while the real trigger of these negative changes remains unclear.
An interesting fact is that under hematoxylin-eosin histology the general character of the tissue changes resembles the changes that occur in vascular atherosclerosis, despite the fundamental difference in the physiology of the two processes; however, for a more thorough comparison it is necessary to study a larger number of samples. | 2019-10-31T09:14:37.579Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "c04c8329e13b6332ed60b38e2b329a8292970bcc",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2019/26/epjconf_epps2018_01028.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0410fe3811f31398af1e0abb4ddb4c98efca2489",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249964223 | pes2o/s2orc | v3-fos-license | A device that facilitates screwing at an appropriate angle in quadrilateral surface fractures: 105-degree drill attachment
Background/aim In this study, we aimed to investigate the radiological and functional outcomes of acetabular fractures involving the quadrilateral surface treated using a 105° drill attachment in the anterior intrapelvic approach. Materials and methods The 35 patients who underwent surgical treatment between January 2016 and January 2020 for acetabular fractures involving the quadrilateral surface, with the anterior intrapelvic approach using the 105° drill attachment and a minimum of 12 months of postoperative follow-up, were included. Perioperative complications, operation duration, and the quality of reduction were evaluated. Reduction quality was classified as poor, imperfect, or anatomic. Functional evaluation was performed according to the Harris Hip Score (HHS) and Merle d’Aubigne Score. Results Among 35 patients (median age 36 (21–80)), radiological results of the acetabular fixations were anatomic, imperfect, and poor in 28 (80%), 5 (14.3%), and 2 (5.7%) patients, respectively. Postoperative 1-year functional outcomes with Merle d’Aubigne scores and HHS were median 18 (10–18) and 90 (60–96), respectively. The clinical outcomes of the patients showed concordance with reduction quality. The median operation duration was 180 minutes (range 125–270). Iatrogenic neurovascular damage was not noted in any patients. Conclusion Reduction and fixation of deep intrapelvic fractures are risky and difficult due to the narrow anatomy and adjacent crucial neurovascular structures. As the 105-degree drill attachment is safe and easy to use, short surgery duration and satisfactory results with minimal complications can be obtained with a 105° angled drill in the deep pelvic region.
Introduction
Anterior, posterior, extensile, or combined approaches are current options for the surgery of acetabular fractures. The modified Stoppa approach has been used more widely in recent years [1]. This approach can be used for fixation of the anterior wall and column, the quadrilateral region, and both columns [2]. It has extended the possible fixation alternatives by enabling the use of long, vertical infrapectineal plates in addition to classically positioned suprapectineal reconstruction plates [3]. Fixation of quadrilateral and ischial fractures is possible using the Stoppa approach [4]; however, reduction and fixation can be challenging in such a deep location, and reduced bone thickness and fragmented fractures may further complicate the procedure [5,6]. Today, various implants are available for displaced quadrilateral surface fractures, but these implants require considerable surgical experience [7].
Reduction and fixation of fractures in deep intrapelvic areas such as the quadrilateral and ischial regions are riskier and more difficult because of the surrounding anatomical structures [6,8]. Care should be taken to protect all neurovascular structures, primarily including the corona mortis; the obturator artery, vein, and nerve; the iliac artery and vein; and the deep pelvic veins, which carry high risks of bleeding and morbidity [1]. In fractures that require infrapectineal plating, or when using implants designed for the quadrilateral surface, it is often difficult to insert screws at the appropriate angle with conventional drills, and there is a high risk of injury to the surrounding anatomical structures. Even with flexible drills, it is usually challenging to adjust the angle of the drill bit; moreover, during rotation of the power drill, the flexible body can entangle surrounding soft tissues such as neurovascular structures and the bladder. With the 105-degree drill attachment (Milwaukee, Brookfield, USA; 105° right-angle driver extension power screwdriver drill bit attachment), both soft tissue protection and drilling at the appropriate angle can be achieved. The outer part of the body is immobile during drilling while the inner part rotates, so the surrounding soft tissue is protected. The distal part of the drill is the only mobile component and the only part that needs attention during bone drilling. The 105-degree angled structure of the attachment allows the drill bit to be directed at the most appropriate angle, especially for screw insertion into the quadrilateral surface in the deep intrapelvic area.
This study aims to assess the effectiveness and outcomes of a new surgical drilling technique using the 105-degree drill attachment in deep pelvic fractures by investigating the postoperative functional and radiological results of cases with at least 1 year of follow-up data.
Materials and Methods
A retrospective study was performed with institutional review board approval. A retrospective analysis was carried out of 230 patients with acetabular fractures treated between January 2016 and January 2020. All data were collected from the electronic data archives, operative notes, and radiographs. Variables such as fixation method, surgery duration, and postoperative complications were recorded. One hundred and fifty-three of the 230 patients, who were operated on with the anterior intrapelvic approach, were screened for use of the 105-degree drill attachment and quadrilateral surface involvement. The inclusion criteria were: both-column acetabular fractures involving the quadrilateral surface, operated on with the anterior intrapelvic approach, with complete physical and radiological examination records over a minimum of 12 months of postoperative follow-up. Patients with insufficient data in the medical records, concomitant injuries such as femur or tibia fractures in the ipsilateral lower extremity, pediatric cases, a history of hip surgery, or open injuries in the pelvic region, as well as patients operated on without the 105-degree drill, were excluded from the study. Patients with severe comorbid diseases or conditions that made postoperative compliance unreliable were also excluded. After these exclusions, 35 patients who were operated on for acetabular fracture involving the quadrilateral surface with the anterior intrapelvic approach using the 105-degree drill bit attachment, and who had at least 12 months of postoperative follow-up data, were included in the study (Figures 1a-1f, 2a-2f).
In our clinic, all pelvic fracture surgeries are performed by two experienced surgeons (C.Y.K., E.G.). In the anterior intrapelvic approach, a vertical midline incision is preferred. After the dissection is deepened through the anterior rectus fascia and rectus abdominis at the midline, the bladder is protected and the fracture line is reached using blunt finger dissection on the corresponding pelvic side. Thus, a clear view of the quadrilateral surface is obtained. During the fixation of fractures in the deep intrapelvic region, such as quadrilateral surface fractures, we use the 105-degree drill attachment instead of conventional drills in order to achieve the safest and most appropriate screw angle.
The patients were followed up for at least 12 months postoperatively. The median follow-up time was 20 months (12-35). On postoperative day 1, anteroposterior and Judet radiographs and pelvic CT scans were reviewed by a radiologist experienced in extremity imaging to assess the reduction quality of the fractures, and the fracture reduction was graded as anatomical (≤1 mm displacement), imperfect (>1 to <3 mm displacement), or poor (≥3 mm displacement) according to the criteria described by Matta [9,10]. Clinical follow-up was planned for 2 weeks, 1 month, 3 months, 6 months, and 1 year postoperatively. Functional results were assessed according to the Harris Hip Score (HHS) [11] and the Merle d'Aubigne and Postel Scoring System [12]. Radiographic and functional outcomes were classified as excellent = 4, good = 3, fair = 2, or poor = 1.
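Since the grading rule is a simple threshold scheme, it can be stated compactly; a minimal sketch (Python) of the Matta cutoffs exactly as defined above, with hypothetical displacement values:

def matta_grade(displacement_mm: float) -> str:
    """Reduction quality per the Matta criteria as used in this study."""
    if displacement_mm <= 1.0:
        return "anatomic"       # <=1 mm residual displacement
    if displacement_mm < 3.0:
        return "imperfect"      # >1 to <3 mm
    return "poor"               # >=3 mm

for d in (0.5, 2.0, 4.0):       # hypothetical measurements
    print(f"{d} mm -> {matta_grade(d)}")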
Statistical analysis
The statistical analysis was performed using SPSS version 22.0 statistical software (SPSS Inc., Chicago, Illinois, USA). The data were analyzed with the Shapiro-Wilk test for distribution pattern. Kruskal-Wallis and pairwise comparison tests were used for comparison of non-normally distributed continuous data. The median, minimum, and maximum values of the data were determined by descriptive analysis.
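For readers without SPSS, the same workflow can be reproduced elsewhere; a sketch using Python's scipy, with invented score values standing in for the real per-group data:

from scipy import stats

# Hypothetical Harris Hip Scores grouped by reduction quality
anatomic  = [92, 90, 88, 94, 91]
imperfect = [85, 80, 78, 83]
poor      = [62, 60]

# Shapiro-Wilk normality check on one group, then Kruskal-Wallis across groups
print("Shapiro-Wilk p =", stats.shapiro(anatomic).pvalue)
h, p = stats.kruskal(anatomic, imperfect, poor)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")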
Results
Among the 35 patients, the median age was 36 (21-80). The median follow-up time was 20 months (12-35). Most of the patients were exposed to high-energy trauma, such as motor vehicle accidents (60%) and falls from height (25.7%).
Patients were closely monitored and the timing of the operation was decided according to the patients' clinical stability. In the preoperative period, patients' comorbid diseases and other concomitant trauma-related problems were referred to the relevant departments for consultation. The median time from trauma to surgery was 5 days (0-9). The median operation duration was 180 min (125-270).
The reduction quality of the acetabular fractures on postoperative day 1 was graded as anatomic, imperfect, and poor in 28 (80%), 5 (14.3%), and 2 (5.7%) patients, respectively. According to the Harris Hip Score (HHS), patients had a median of 90 (60-96) (excellent in 20 patients, good in 10 patients, fair in 3 patients, and poor in 2 patients) (Table 1). According to the Merle d'Aubigne and Postel Scoring System, the median was 18 (10-18) (excellent in 18 patients, good in 12 patients, fair in 3 patients, and poor in 2 patients) (Table 2). In terms of the Merle d'Aubigne Score, there was a significant difference between stage 1 and stage 3 (p = 0.014), and also between stage 1 and stage 2 (p = 0.006). There was no significant difference between stages 2 and 3 for this scoring system. In terms of the Harris Hip Score, there was a significant difference between patients whose reduction quality was stage 1 and those with stage 3 (p = 0.024), and also between stage 1 and stage 2 (p = 0.010). No statistically significant difference was observed between stages 2 and 3 in this group. Also, there was no statistically significant difference between the groups in terms of operation time, follow-up time, or time to surgery. Two patients had poor clinical outcomes; these patients were over 65 years of age and had poor and imperfect reduction quality, respectively. These patients subsequently underwent total hip arthroplasty.
Iatrogenic neurovascular damage and postoperative deep infection were not noted in any patients operated on with the 105-degree drill attachment. There was no articular screw penetration on postoperative CT scans. Heterotopic ossification was also not observed in any of the patients. Postoperative deep vein thrombosis developed in 2 patients with comorbid diseases.
Discussion
Because of the intra-articular nature of acetabular fractures, the main objectives of surgical treatment are to provide anatomical reduction without articular step-off, achieve stable fixation, start joint movements at an early stage, and regain joint function as soon as possible. Studies have shown that even millimetric displacement may result in progressive posttraumatic osteoarthritis and that clinical and radiological results will then not be satisfactory [9,10]. However, ensuring anatomical reduction is often difficult due to the complex three-dimensional anatomy of the acetabulum and pelvis [13]. A wide variety of reduction and fixation materials can be used during surgery. However, it may not be possible to orient the screw direction at the desired angle in the deep pelvic region. The main difficulties are often accompanied by medial protrusion of the femoral head, a high degree of comminution, and dome impaction [14]. Pelvic brim plates are successful in preventing medial displacement, but it is quite difficult to insert the screws into these plates periarticularly [15]. The application of the pelvic brim plate was first mentioned by Hirvensalo et al. [16]. Later, an infrapectineal plate was described by Cole and Bolhofner for the treatment of acetabular fractures involving the quadrilateral surface with a modified Stoppa approach [1]. They stated that one should be careful not to place the screws directly adjacent to the quadrilateral surface as the screws may penetrate the joint. The authors also indicated that care should be taken to protect all neurovascular structures, primarily including the corona mortis; the obturator artery, vein, and nerve; the iliac artery and vein; and the deep pelvic veins, which carry high risks of abundant bleeding and morbidity. In addition, acetabular fracture surgery requires surgical specialization due to the complex pelvic anatomy, and it therefore has a prolonged and steep learning curve [17][18][19]. Fixation of fractures of the quadrilateral surface is challenging due to the position of the plate in the lesser pelvis [6,8]. It is difficult to drill holes in the appropriate direction in plates located in the deep pelvic region. There are various drills that can be used in these surgeries (Figure 3a,3b,3c,3d). Especially when using a conventional drill, adjusting the drill orientation is often difficult due to both the bladder and the retractors (Figure 4). Flexible drilling requires both hands in order to position the drill bit at the appropriate angle, and it is difficult to use both hands in the narrow, deep pelvic region (Figure 5). With our new drilling technique, we have the advantage of being able to drill using one hand. Also, when using a flexible drill, the body of the drill can entangle the surrounding soft tissue. The use of the 105° angle drill allows the screws to be placed at any appropriate angle in the deep intrapelvic region. Another important point is that the body of the drill is shielded, which minimizes the possibility of entangling the surrounding tissues during drilling (Figures 6, 7a, 7b). Due to the accumulation of abdominal subcutaneous fat tissue, management of acetabular fractures is more challenging in obese patients because the surgical field is deeper. Additional assistant surgeons and special surgical equipment are often needed to aid soft tissue retraction [6]. The 105° angle drill is therefore of even greater value in obese patients.
T-shaped, anterior column and posterior hemitransverse, both-column, posterior column, and combined transverse fractures are most often associated with medial migration of the quadrilateral region [20,21]. In our study, in order to standardize the fractures, we investigated only patients with both-column fractures involving the quadrilateral surface.
In the literature, there are many studies reporting satisfactory outcomes. Sagi et al. reported that excellent/good acetabular fracture reduction was achieved in 92% of 57 cases, with excellent/good clinical outcomes in 91% according to the Merle d'Aubigne Score [22]. Isaacson et al. reported anatomic or good reduction in 92% of 36 cases and good/excellent clinical results, by Merle d'Aubigne score, in 82% [23]. Hirvensalo et al. reported excellent or satisfactory fracture reduction quality in 84% of a series of 164 cases and good/excellent HHS results in 75% of cases [16]. Liu et al. reported excellent or good fracture reduction in 92% of 24 cases, and good/excellent clinical HHS results in 93% [3].
Correspondingly, in our study, anatomic reduction was achieved in 28 (80%) patients. The clinical results were excellent/good in 30 (85.7%) cases according to the Modified Merle d'Aubigne Score System and, similarly, excellent/good HHS results were observed in 30 (85.7%) cases. In our clinical experience, the clinical outcomes of the patients showed concordance with reduction quality.
In our clinic, we also use conventional and flexible drills, but especially in fractures of the deep intrapelvic region we use the 105-degree drill, which can be used with different drill bit sizes in the deep intrapelvic region (Figure 8). In this study, there was no articular penetration of a screw on postoperative control CT scans. Neurovascular damage was not noted in any cases operated on with the 105-degree drill attachment. When the literature is examined, contrasting results have been reported in terms of neurological damage. Sagi et al. reported a paralyzed obturator nerve in 13 patients (26%) postoperatively [22]. Also, Ma et al. and Laflamme et al. reported 2 (6.7%) and 1 (4.8%) patients with obturator nerve palsy after acetabular surgery with the modified Stoppa approach, respectively [5,24]. Besides preserving the joint from screw penetration, we observed that the biggest advantage of the 105-degree drill is that it is more effective in protecting the neurovascular structures.
Sagi et al. conducted a study of 57 patients operated on with an anterior intrapelvic approach and reported a mean operation time of 263 min [22]. In our study, we observed that the median operation time was 180 min (125-270). We think that the operation duration is shorter than the literature findings because the attachment enables fixation in one pass and in the most appropriate screw position without multiple drilling attempts [1,16,22]. In our study, deep infection was not observed in any patients. We think that the short operation time achieved with the 105-degree drill attachment technique and compliance with surgical sterilization rules in acetabular surgery operations in our clinic enabled us to achieve this result.
The small sample size, the retrospective nature of our study, the absence of a control group, and the inability to compare the results and surgical duration with similar patients treated with a conventional drill were the major limitations. The absence of a patient-related health quality of life measurement is also a limitation of the study.
Conclusion
According to our clinical observations, with the use of the 105-degree drill attachment, radiological and clinical results were gratifying while complication rates were low. Using a 105-degree drill attachment in deep intrapelvic fractures that are difficult to access, such as those of the quadrilateral surface, allows drilling and screw insertion at the most appropriate angle, shortening the operation time and protecting the surrounding soft tissues.
Conflicts of interest
All authors declare that there is no conflict of interest in this study.
Informed consent
The ethics committee found no objection, in terms of the ethics of scientific research, to the conduct of the research, provided that the permissions stated in the declaration are obtained and the data forms declared in the application form are not exceeded. Approval for this application was granted (decision number: 158; decision date: 22.07.2020). | 2022-06-24T15:06:57.812Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "a6eb797ae9eaa33b7cb567881a54023128c0b9cd",
"oa_license": "CCBY",
"oa_url": "https://journals.tubitak.gov.tr/cgi/viewcontent.cgi?article=5378&context=medical",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c9f760257d855b19f0215ffb1fded5ce3e52607",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
226270781 | pes2o/s2orc | v3-fos-license | Is older age associated with COVID-19 mortality in the absence of other risk factors? General population cohort study of 470,034 participants
Introduction Older people have been reported to be at higher risk of COVID-19 mortality. This study explored the factors mediating this association and whether older age was associated with increased mortality risk in the absence of other risk factors. Methods In UK Biobank, a population cohort study, baseline data were linked to COVID-19 deaths. Poisson regression was used to study the association between current age and COVID-19 mortality. Results Among eligible participants, 438 (0.09%) died of COVID-19. Current age was associated exponentially with COVID-19 mortality. Overall, participants aged ≥75 years were at 13-fold (95% CI 9.13–17.85) mortality risk compared with those <65 years. Low forced expiratory volume in 1 second, high systolic blood pressure, low handgrip strength, and multiple long-term conditions were significant mediators, and collectively explained 39.3% of their excess risk. The associations between these risk factors and COVID-19 mortality were stronger among older participants. Participants aged ≥75 without additional risk factors were at 4-fold risk (95% CI 1.57–9.96, P = 0.004) compared with all participants aged <65 years. Conclusions Higher COVID-19 mortality among older adults was partially explained by other risk factors. ‘Healthy’ older adults were at much lower risk. Nonetheless, older age was an independent risk factor for COVID-19 mortality.
Introduction
COVID-19 is an emerging infectious disease caused by the novel coronavirus SARS-CoV-2 and has a wide spectrum of manifestations ranging from asymptomatic infection to severe pneumonia and respiratory failure. As of mid-August 2020, the COVID-19 pandemic has infected over 20 million people globally and caused at least 750,000 deaths [1].
Preliminary reports have shown that older people were at a higher risk of COVID-19 complications, with higher rates of hospitalisation, intensive care unit admission, intubation, and death [2][3][4]. Currently, it is unclear whether chronological age per se is an independent risk factor for severe COVID-19, or whether the association simply reflects risk factors being more common among older adults. Also, the mechanisms through which older age may predispose to poorer prognosis have yet to be elucidated. Several hypotheses have been proposed as to why older people might be more susceptible to severe COVID-19 infection, including a weaker immune response [5], obesity [6], age-related decline in respiratory function [6], frailty [7], and multimorbidity [8,9].
These questions cannot be answered using hospital studies due to selection biases in testing and admission, nor using administrative databases because of insufficient information on confounding and mediating factors. Therefore, we used UK Biobank, a large, general population cohort study with rich pre-infection data, to identify factors that help explain the association between age and COVID-19 mortality and determine whether age per se is a risk.
Methods
UK Biobank recruited over 502,000 participants aged 37 to 73 years (47 to 85 years as of 1 March 2020) at 22 assessment centres across England, Scotland, and Wales between March 2006 and December 2010. We excluded all participants known to have died prior to 1 March 2020, before the COVID-19 pandemic reached the UK.
UK Biobank received ethical approval from the North West Multi-Centre Research Ethics Committee (REC reference: 11/NW/03820). All participants gave written informed consent before enrolment in the study, which was conducted in accord with the principles of the Declaration of Helsinki.
Outcomes
COVID-19 death records were based on death certificates, available on all participants up to 30 June 2020. COVID-19-related deaths were defined as ICD-10 codes U07.1 or U07.2 on the death certificates.
Exposures
At baseline assessment, biological measurements were taken, and data were collected via both a self-administered touch-screen questionnaire and a research nurse-led interview according to a standardised protocol, a median of 11.1 (interquartile range 10.4-11.8) years before 1 March 2020. Current age (on 1 March 2020) was derived from date of, and age at, recruitment and was trichotomised into <65, 65-74 and ≥75 years. Ethnicity, smoking, medical history and medication use were self-reported at baseline. For the present analyses, ethnicity was classified as white or other, due to insufficient participants in the non-white groups (n = 25,186, <6%).
Smoking status was categorised into current/former smoker and never smoker. Systolic blood pressure (SBP) was measured at the baseline assessment using automated measurements (or manual measurements if unavailable), and the mean of the available measurements was derived. Area-level socioeconomic deprivation was based on the Townsend score of the participant's home postcode, derived from Census data on unemployment, non-car ownership, non-home ownership and household overcrowding. Higher Townsend scores represent greater socioeconomic deprivation [10].
Body mass index (BMI) was derived from measured body mass in kilograms divided by height squared, measured in metres. Height was measured, without shoes and socks, using a Seca 202 height measure. Weight and whole-body fat mass and fat free mass were measured to the nearest 0.1 kg using the Tanita BC-418 MA body composition analyser.
Lung function was assessed by spirometry using a Vitalograph Pneumotrac 6800 spirometer (Vitalograph, Buckingham, UK). Participants did not perform spirometry if they were pregnant, on medication for tuberculosis or had a history of: chest infection (in the last month); detached retina; myocardial infarction; eye, chest or abdominal surgery (in the last three months); or collapsed lung. The aim was to record two acceptable blows from a maximum of three attempts. The spirometer software compared the acceptability of the first two blows and, if acceptable (defined as ≤5% difference in FVC and FEV1), the third blow was not required. In the moderation analyses, we used the height-, sex-, and ethnicity-specific predicted FEV1 value at 65 years of age from the Global Lung Function Initiative (GLI) [11] as the cut-off value to define normal versus low FEV1, because participants who were 75 years of age during the pandemic were around 65 years of age at baseline.
The Fried classification uses five criteria: weight loss, exhaustion, physical inactivity, slow walking speed and low grip strength. Grip strength was measured using a Jamar J00105 hydraulic hand dynamometer and the mean was derived from the right and left hand values expressed in kilograms. Self-reported walking pace was categorised as slow, average, or brisk. An adapted version of the frailty classification derived by Fried et al. was used in this study [12]. Participants were classified as frail if they fulfilled three or more criteria, prefrail if they fulfilled one or two criteria and robust (non-frail) if they did not fulfil any criteria.
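The adapted Fried classification reduces to counting fulfilled criteria; a minimal sketch (Python) of the rule as stated above:

def fried_category(criteria_met: int) -> str:
    """Adapted Fried frailty class from the number of criteria fulfilled (0-5)."""
    if criteria_met >= 3:
        return "frail"
    if criteria_met >= 1:
        return "prefrail"
    return "robust"

print([fried_category(n) for n in range(6)])
# ['robust', 'prefrail', 'prefrail', 'frail', 'frail', 'frail']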
The information collected on long-term conditions (LTCs) during the nurse-led interview (full list contained in S1 Table) was converted into the total number of LTCs for each participant.
Statistical analyses
Means and standard deviations were reported for continuous variables and numbers and percentages for categorical variables. Poisson regression models with robust standard errors were used to analyse the associations between risk factors and COVID-19 mortality, with the results reported as risk ratios (RRs) and 95% confidence intervals (CIs) [13]. Poisson regression models were used instead of logistic regression because they provide RR estimates which aid clinical interpretation.
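The analysis was run in R; as an illustrative sketch only, the same estimator — a Poisson GLM on a binary outcome with robust (sandwich) standard errors, exponentiated to risk ratios — looks like this in Python's statsmodels, on simulated data with invented variable names:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "covid_death": rng.binomial(1, 0.01, size=10_000),             # binary outcome
    "age_group": rng.choice(["<65", "65-74", ">=75"], size=10_000),
    "sex": rng.choice(["F", "M"], size=10_000),
})

# Poisson GLM with robust (HC1) covariance on binary data yields risk ratios
fit = smf.glm("covid_death ~ C(age_group, Treatment('<65')) + C(sex)",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
rr = np.exp(fit.params)        # exponentiated coefficients = risk ratios
ci = np.exp(fit.conf_int())    # 95% CIs on the risk-ratio scale
print(pd.concat([rr.rename("RR"), ci], axis=1))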
We used penalised thin plate regression splines to model the association between age and COVID-19 mortality, as it may not be linear [14]. Splines were chosen over fractional polynomials for their ability to capture deep curvatures [15]. Penalised thin plate regression splines provide more robust results than cubic splines as knot locations do not need to be chosen [16].
The main analyses were adjusted for potential confounding factors: sex, ethnic group, deprivation index and smoking status. We studied four groups of potential mediators: physical (BMI, SBP), respiratory (FEV1 and FEV1/FVC ratio), frailty (non-frail, prefrail and frail), and number of LTCs. These factors were included as covariates in the Poisson models to determine whether, and to what extent, the RRs between age and COVID-19 mortality were attenuated. In addition, mediation analysis under a counterfactual framework was also conducted [17]. To avoid multicollinearity and unnecessary adjustment between potential mediators, potential mediators were selected in a stepwise process. Firstly, COVID-19 mortality was regressed on current age, all potential mediators, and confounding factors in a Poisson model. Only the potential mediators reaching statistical significance (α = 0.05) were further investigated. Factors with high effect sizes (RR <0.9 or RR >1.1) were also considered. The selected potential mediators were then regressed on age and other covariates (mediator model) in either multiple Poisson (for binary mediators) or linear (for other mediators) models, adjusting for each other and for sociodemographic factors. The outcome and mediator models were then combined to compute the natural indirect effect (NIE) and total effect (TE) for each participant, which were then averaged. Quasi-Bayesian estimation with 1,000 iterations was used for estimating the 95% CIs and p-values of the NIE and TE. The mediation proportion was calculated as NIE / TE.
The potential moderating role of risk factors in the association between age and COVID-19 mortality was studied in a series of subgroup analyses using combinations of age (<65, 65-74, and ≥75 years) and risk factors: current/former smoking, low FEV1 (below the height-, sex-, and ethnicity-specific predicted value at 65 years of age [11]), obesity (BMI >30 kg/m²), hypertension (SBP ≥140 mmHg, DBP ≥90 mmHg or antihypertensive medication), frailty (prefrail or frail), and ≥3 LTCs. These variables were categorised for easier interpretation. The interactions between risk factors and age group were tested using likelihood ratio tests comparing models with and without the interaction terms. Each risk factor was combined with age group and the RR derived for each permutation referent to participants aged <65 years and without the risk factor. This was repeated for the combinations of age and total number of risk factors.
Missing data were handled using complete case analysis. All analyses were conducted using R version 4.0.2 with the packages mgcv and mediation.
Results
Of the 502,506 UK Biobank participants, we excluded 29,295 who died prior to 1 March 2020 and 3,177 who had incomplete data on potential confounding factors, resulting in 470,034 participants being included in the analyses of COVID-19 mortality (S1 Fig). Overall, 438 participants died of COVID-19.
Older participants were less deprived, less likely to be current smokers, more likely to be frail, and had higher SBP, lower handgrip strength, poorer lung function, and more LTCs (Table 1). Participants who died of COVID-19 were older, less likely to be white, more likely to smoke, be male, obese and frail, and had more LTCs, higher SBP and poorer lung function (Table 1).
Current age was associated exponentially with COVID-19 mortality (Fig 1). Adjusting for physical (Model 2), respiratory (Model 3), and LTC (Model 5) covariates attenuated the association. There was no evidence of a non-linear association between age and the logarithm of mortality risk.
After adjusting for potential confounding factors and other age-related risk factors, only BMI, SBP, handgrip strength, FEV1, and ≥3 LTCs were significantly associated with COVID-19 mortality. BMI was excluded from the mediation analysis as it was inversely associated with age after adjustment for other potential mediators. Therefore, mediation analysis was conducted on FEV1, SBP, handgrip strength, and LTCs. These factors collectively accounted for 39.3% of the association between older age and COVID-19 mortality (Table 2).
There were statistically significant interactions between age group and all risk factors in relation to COVID-19 mortality, except for frailty. Fig 2 shows the associations with COVID-19 mortality of different combinations of age and risk factors. Compared with participants <65 years of age who had never smoked, participants ≥75 years of age had a higher risk even if they had never smoked (RR 13.03, 95% CI 7.85-21.62, P<0.0001) and higher still if they had ever smoked (RR 19.68, 95% CI 12.05-32.14, P<0.0001). A similar pattern was observed for FEV1, obesity, hypertension, and number of LTCs. Overall, participants aged ≥75 years were at 13-fold (95% CI 9.13-17.85) mortality risk compared with those <65 years. The association between number of risk factors and COVID-19 mortality was stronger among older participants. Participants aged ≥75 years with no additional risk factors (smoking, low FEV1, obesity, hypertension, frailty, and multiple LTCs) had 12-fold mortality risk (RR 12.13, 95% CI 2.79-52.66, P = 0.0009) compared with those aged <65 years with no risk factors, and had 4-fold mortality risk (95% CI 1.57-9.96, P = 0.004) compared with all participants aged <65 years (S2 Table).
Principal findings
This study demonstrated an exponential association between age and COVID-19 mortality. Over one-third of older adults' excess mortality risk was mediated by poorer lung function, hypertension, muscle weakness, and multiple LTCs. Among older participants, these factors were both more common and more strongly associated with higher COVID-19 mortality.
FEV1 is a commonly used marker of respiratory function [18,19]. It is used to diagnose chronic obstructive pulmonary disease (COPD) but is also associated with mortality independent of clinical disease [18]. FEV1 generally peaks in early adulthood and declines with age beyond 30 to 40 years of age [19]. However, there is a large variation in the peak value and age-related rate of decline due, in part, to lifestyle factors such as smoking, obesity, and physical activity [20]. The mechanism underlying the relationship between FEV1 and COVID-19 merits further study but may be due to people with poorer FEV1 having less cardiorespiratory reserve to buffer against the immune-mediated lung response to COVID-19 infection [6].
LTCs are more common among older adults and have been shown to be associated with poorer functional health [21] and poorer outcomes in COVID-19 [22,23]. This is consistent with other infectious diseases [24][25][26]. The association between LTCs and increased risk of COVID-19, as in other infectious diseases, could be related to shared biological pathways such as chronic low-grade inflammation [27,28] and attenuated immune response [29].
Strengths and limitations
This study used a large, general population cohort that provided extensive pre-infection data on sociodemographic factors, physical measurements, LTCs, and respiratory function. Therefore, we were able to take account of multiple confounders, identify potential mediators and undertake sub-group analyses. However, there are several limitations to this study. COVID-related deaths relied on death certificate records and it is possible that a small number of participants who died of COVID-19 were miscoded. However, as we have included both confirmed (ICD-10 U07.1) and suspected (ICD-10 U07.2) cases, such misclassification should be minimal. All analysed risk factors, excluding age, were assessed 10 years prior to the COVID-19 pandemic and may have changed over time. Any deterioration in these factors over time is likely to be greater in older age-groups and, therefore, the findings are biased towards the null. No participants in UK Biobank are currently aged >85 years and therefore our findings should not be generalised to people over 85 years of age. The UK Biobank cohort is not completely representative of the general UK population [30,31]. However, effect sizes, such as the risk ratios reported in this study, are still generalisable [32]. As with other observational studies, residual confounding may exist. The mediation analyses conducted assumed no causal relationship between mediators and thus could not detect sequential mediation.
Comparison with existing studies
The majority of studies on ageing and COVID-19 have been based on hospital samples and focused on complications or case fatality. It was reported that those who required admission to intensive care units were on average 15 years older and more likely to have underlying comorbidities [33]. A recent meta-analysis of 33 studies conducted on a total of 3,027 patients with COVID-19 showed that adults older than 65 years were five times more likely to become critical or die [23]. In the US, it was estimated that COVID-19 related hospitalisation was lowest among 0-17 year-olds and increased almost linearly with age, from 2.5 per 100,000 among people 18-49 years to 17.2 per 100,000 among those ≥85 years [2]. This is in contrast to our present finding that people aged below 70 shared a similar risk. The inconsistency may be due to the lower test rate in the UK, where tests have, so far, been largely confined to people with more severe symptoms, or to the fact that the minimum current age in UK Biobank was 50 years.
Conclusions
Our findings suggest that the risk of COVID-19 mortality is higher in older adults. In this cohort, over one-third of this excess was due to older adults being more likely to have other risk factors (e.g. poorer lung function and hypertension) and these risk factors conveying a stronger risk of COVID-19 death among older people. Nonetheless, older age was associated with COVID-19 mortality independent of other risk factors.
Currently, everyone over 70 years of age is classified as being at moderate risk from COVID-19 irrespective of their general health [34]. As such they are recommended to be more stringent in following social distancing. Our study findings suggest that efforts to protect older people should prioritise those who have additional risk factors. | 2020-11-07T14:06:47.552Z | 2020-11-05T00:00:00.000 | {
"year": 2020,
"sha1": "7326ea24c524f83401633e54222caea41a5bd292",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0241824&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "54a70bd989cc11aa50577db63ad9bf79b96f6053",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16614613 | pes2o/s2orc | v3-fos-license | The impact of maternal experience of violence and common mental disorders on neonatal outcomes: a survey of adolescent mothers in Sao Paulo, Brazil
Background Both violence and depression during pregnancy have been linked to adverse neonatal outcomes, particularly low birth weight. The aim of this study was to investigate the independent and interactive effects of these maternal exposures upon neonatal outcomes among pregnant adolescents in a disadvantaged population from Sao Paulo, Brazil. Methods 930 consecutive pregnant teenagers admitted for delivery were recruited. Violence was assessed using the Californian Perinatal Assessment. Mental illness was measured using the Composite International Diagnostic Interview (CIDI). Apgar scores of newborns were estimated and their weight measured. Results 21.9% of mothers reported lifetime violence (2% during pregnancy) and 24.3% had a common mental disorder in the past 12 months. The exposures were correlated and each was associated with low education. Lifetime violence was strongly associated with common mental disorders. Violence during pregnancy (PR = 2.59, 95% CI 1.05–6.40), threat of physical violence (PR = 1.86, 95% CI 1.03–3.35) and any common mental disorder (PR = 2.09, 95% CI 1.21–3.63) (as well as depression, anxiety and PTSD separately) were independently associated with low birth weight. Conclusion Efforts to improve neonatal outcomes in low income countries may be neglecting two important independent, but correlated, risk factors: maternal experience of violence and common mental disorder.
rarer, but with results in the same direction [2][3][4]. For antenatal mental disorder, the pattern of findings suggests that socio-economic status may be an effect modifier, with associations with low birth weight only being apparent in more deprived communities [5][6][7][8][9]. Although previous studies have looked at the effects of these exposures separately, they are, in fact, closely related. In a meta-analysis, the weighted odds ratios for the association of different mental disorders with violence among women varied from 3.5 to 5.6 [10]. The aim of this study is to describe, among disadvantaged adolescent Brazilian mothers, the association between these two exposures and their independent and interactive effects on newborn outcomes: low birth weight, small for gestational age, preterm birth, stillbirth, and Apgar scores.
Sample and setting
The study was carried out in the Hospital Maternidade Mario de Moraes Altenfelder, the only public hospital providing obstetric care to people living in a poor neighbourhood in the north of São Paulo. Consecutive adolescents (11 to 19 years old) admitted to the hospital for obstetric care between 24/7/2001 and 27/11/2002 were invited to participate.
Measurements
Data were collected through interviews in hospital after the women had recovered from labor and the effects of anaesthesia. This period varied from 4 to 48 hours after delivery.
Experience of violence
Violence was assessed using the relevant section from the Californian Perinatal Assessment [11,12]. The questions, which were translated to Portuguese and back-translated to English to ensure semantic and content validity, are: 1. Sometimes women (girls) are physically attacked by another person. Have you ever been attacked with a gun, knife or other weapon, either by a family member or a lover or friend, or by a stranger? (Subsequent questions also included a reminder to consider each of these potential perpetrators) 2. Have you ever been attacked by anyone without a weapon but with the intent to seriously injure you?
3. Have you ever been threatened with the intent to seriously harm or injure you? 4. Has anyone ever made you have any kind of sex by using physical force or by threatening to harm you? 5. Did any of these incidents occur during your pregnancy? 6. Did you ever ask for police help or for a restraining order due to domestic violence?
The first two questions were combined to generate the variable 'any physical violence' and questions one to four were combined to generate the variable "any lifetime violence". Item 4 was used to define "any sexual violence". Item 5 was used to define the exposure of any type of violence experienced during pregnancy.
Mental health
Mental disorders were assessed using the Composite International Diagnostic Interview (CIDI, version 2.1). The interview has been validated for use in Brazil [13,14] and all interviewers attended the accredited CIDI training centre. The primary outcome was Common Mental Disorder (CMD) in the previous 12 months, defined as a diagnosis of depression, anxiety, post-traumatic stress disorder (PTSD), or a somatoform or dissociative disorder at any time in the past 12 months according to the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV).
Newborn outcomes
Five outcomes were considered: stillbirth, preterm birth, small for gestational age (SGA), low birth weight, and low Apgar scores. Babies were weighed immediately after delivery by a pediatrician using a digital scale with a precision of 10 grams. Low birth weight was defined as < 2500 grams. A low Apgar score was defined as below 7 at 5 minutes [15]. Gestational age was calculated using the date of the last menstrual period reported by participants and the New Ballard method [16]. A cutoff point of less than 37 weeks of completed gestation was used to define prematurity [17]. Small-for-gestational-age birth was defined as birth weight below the 10th percentile of expected weight for gestational age [17], using, in the absence of any Brazilian reference data, a Canadian [18] population-based, gender-specific reference for birth weight.
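The outcome definitions above are threshold rules; a minimal sketch (Python) with hypothetical values — the SGA check assumes a 10th-percentile reference weight looked up beforehand from the Canadian tables:

def classify_newborn(weight_g, gest_weeks, apgar5, p10_weight_g):
    """Apply the study's outcome definitions to one singleton live birth."""
    return {
        "low_birth_weight": weight_g < 2500,          # < 2500 g
        "preterm": gest_weeks < 37,                   # < 37 completed weeks
        "low_apgar": apgar5 < 7,                      # Apgar < 7 at 5 minutes
        "small_for_gestational_age": weight_g < p10_weight_g,
    }

print(classify_newborn(weight_g=2300, gest_weeks=38, apgar5=9, p10_weight_g=2650))
# {'low_birth_weight': True, 'preterm': False,
#  'low_apgar': False, 'small_for_gestational_age': True}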
Potential confounders, mediators and covariates
We also asked about the participants' age, education, socio-economic status and living arrangements. A Brazilian classification of socio-economic class [19] was used, which takes into account the education of the head of the household and the number of domestic electrical appliances in the household. It classifies individuals into five categories (A to E), recoded here into three: high (A and B), middle (C) and low (D and E). Obstetric history included number of pregnancies, alcohol and tobacco intake during pregnancy, pre-existing diseases (hypertension, diabetes and any 'lung, heart or kidney' diseases), and pregnancy complications (pregnancy-induced hypertension, early labour, placenta praevia, placental abruption).
Ethics
Written informed consent was sought after explanations about the aims, potential risks and benefits of the research. Interviewers offered referral to the social and mental health team of the hospital or other agencies, as appropriate. The study was approved by the ethical committee of the hospital (Hospital Maternidade Mario de Moraes Altenfelder) and the ethical committee of Federal University of Sao Paulo.
Statistical analysis
Prevalence ratios (PR) and 95% confidence intervals were calculated for associations of violence exposures with maternal and newborn health outcomes. Poisson regression with robust variance [20] was used to estimate Prevalence Ratios and to adjust for the effect of other variables; associations of maternal mental health with violence were adjusted for age and education, while associations of maternal mental health and violence with low birth weight were initially adjusted for potential confounders (age, education, baby gender, parity, pre-existing conditions and maternal mental disorders/maternal exposure to lifetime violence), and incrementally for potential mediator factors (complications during pregnancy, alcohol intake and smoking during pregnancy and number of ante-natal consultations). An interaction between violence during pregnancy and CMD in the last 12 months was tested in the final model using a likelihood-ratio test.
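The interaction test can be sketched as nested Poisson fits compared by a likelihood-ratio test; the snippet below (Python/statsmodels, simulated data and invented column names) mirrors that step only:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
data = pd.DataFrame({
    "lbw": rng.binomial(1, 0.08, size=800),            # low birth weight (0/1)
    "violence_preg": rng.binomial(1, 0.02, size=800),  # violence in pregnancy
    "cmd": rng.binomial(1, 0.24, size=800),            # common mental disorder
})

base = smf.glm("lbw ~ violence_preg + cmd", data=data,
               family=sm.families.Poisson()).fit()
full = smf.glm("lbw ~ violence_preg * cmd", data=data,
               family=sm.families.Poisson()).fit()

lr = 2 * (full.llf - base.llf)       # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=1)          # one extra parameter (the interaction)
print(f"LR = {lr:.2f}, p = {p:.3f}")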
Results
One thousand and two pregnant adolescents were admitted to the hospital during the study period, representing 24.4% of the 4108 women admitted for obstetric care. One thousand adolescents agreed to participate, of whom 70, admitted for miscarriage, were excluded entirely from this study (n = 930). There were eight twin pregnancies and ten still births; Apgar scores, preterm births and small-for-gestational-age births were studied only in singleton live births, leaving 912 mother/child dyads for these analyses. Low birth weight was studied only in singleton live births of 37 weeks of gestational age or above, leaving 795 mother/child dyads for these analyses. A third of the mothers were 16 or younger (Table 1). Two-thirds had completed fewer than eight years of schooling and almost half had a family income less than 400 Reais (US$120). Most cohabited with a partner. The majority had not planned their pregnancy, and were having their first baby (Table 2). One in five mothers had complications during pregnancy. Drinking and smoking were relatively uncommon. Only 4.5% (n = 42) of the participants did not present antenatally, but 30% had had fewer than the recommended six antenatal consultations. One hundred and thirty one adolescents (14.2%) had pre-term babies. 40.1 percent (n = 370) had a normal vaginal delivery, while 32.2% (n = 297) had a forceps delivery and 27.7% (n = 256) had a caesarean section.
Experience of violence
203 participants (21.8%) had experienced one or more types of violence at some time in their lives. The most common category was actual physical violence, experienced by 14% (n = 130) (evenly distributed between with and without a weapon), with threats of physical violence reported by 10% (n = 95) and 5% (n = 48) reporting sexual violence. Most of the violence went unreported; only 4% of those who suffered violence with a weapon and 17% of those reporting sexual violence had sought police help. Only 2% of mothers (n = 19) experienced violence during pregnancy; 26% of these had asked for police help.
Mental health during pregnancy
Two hundred and twenty six participants (24.3%) were diagnosed with a Common Mental Disorder in the previous 12 months. The most common diagnosis was depression (13.0%) followed by PTSD (9.8%), anxiety disorders (5.7%), somatoform disorder (1.8%) and dissociative disorder (0.3%). There was much comorbidity between depression, anxiety and PTSD: 32.0% of those with PTSD and 39.6% of those with anxiety also had depression.
Associations of violence and CMD with potential confounders and mediators
Low education and higher alcohol intake in pregnancy were associated with lifetime violence and CMD (tables 1 and 2). Higher parity was associated with lifetime violence (table 2), and pre-existing physical conditions and smoking with CMD. There was a non-significant trend for an association between CMD and premature birth.
Association of maternal experiences of violence and common mental disorders with newborn outcomes
Ten babies (1.1%) were stillborn. Only one of these mothers had experienced lifetime violence (not during the pregnancy). There was no association between CMD and stillbirth, crude PR = 1.33 (95% CI 0.35-5.12). Apgar scores at five minutes were low (below 7) in 146 babies (16%). One hundred and forty one babies (15.5%) had low birth weight (LBW). Among those over 37 weeks of gestational age (n = 795) there were 62 babies with LBW (7.8%). There was a statistically significant trend for the risk of LBW to decrease with the number of ante-natal consultations (p-value, test for trend = 0.03). LBW was also strongly associated with pregnancy complications (crude PR = 1.85, 95% CI 1.10-3.10). Table 3 summarizes the associations of violence and CMD with low birth weight. There was a statistically significant crude association of all types of violence with low birth weight. However, after adjustment for potential confounders, only threat of physical violence and violence during pregnancy remained associated. After adjusting also for potential mediators (gestational age, complications during pregnancy, alcohol intake and smoking during pregnancy and ante-natal consultations), both any violence during pregnancy (PR = 2.59, 95% CI 1.05-6.40) and CMD (PR = 2.09, 95% CI 1.21-3.63) remained independently associated with LBW. The effects of violence during pregnancy and CMD in the last 12 months on birth weight were additive rather than multiplicative, with no statistical interaction in the final model (p = 0.31; interaction term PR = 0.35, 95% CI 0.11-2.00). One hundred and seventy seven adolescents
Discussion
Brazilian adolescents (10-19 years old) comprise 21% of the population [21]. Adolescent pregnancies are associated with poorer perinatal outcomes including low birth weight [22]. We found that violence and common mental disorders were common and correlated exposures among adolescent mothers attending a public obstetric hospital. Violence before pregnancy was not associated with low birth weight, unless there was a sexual element. Violence during pregnancy was less common, but was robustly associated with low birth-weight. Despite the strong associations between experience of violence and common mental disorders, the effects of each upon low birth weight were largely independent. Violence in pregnancy was associated with small-for-gestational-age but not with pre-term birth, whereas CMD was associated with preterm birth but not SGA. Neither common mental disorder, nor experience of violence seemed to be associated with other adverse neonatal and pregnancy outcomes including pregnancy complications, still birth and Apgar scores.
The main limitation of our study is the timing of the interviews with mothers. The measurement of mental health shortly after childbirth may be confounded by emotional experiences common after childbirth. Also, recall bias is a possibility when exposures are ascertained after the outcomes have occurred. Arguably, this is more likely for stillbirth than for the less striking outcomes of low Apgar scores and low birth weight. However, we used a structured mental health diagnostic interview, delivered by trained interviewers who ensured that interviews were only carried out after the mother had fully recovered from childbirth. This method has been used in other studies [3,23]. Antenatal interviews would have been logistically difficult because of the patchy nature of antenatal care. Furthermore, episodes of violence and mental disorder following the interview and prior to childbirth may be missed. Maternal CMD may also have biased recall of violent events in the past. This may have led to an overestimation of the association between these two exposures. However, this should not affect our main findings and makes the finding of an independent association of both exposures with LBW even more striking. Although we adjusted for most recognized correlates of poor maternal mental health, violence and low birth-weight, the possibility of residual confounding cannot be excluded. Maternal Body Mass Index (BMI), which is associated with neonatal adverse outcomes [24], might have been a mediator in our study, linking both exposures to low birth weight. Maternal BMI was not assessed, so unfortunately we could not explore this possibility. We have used a population-based Canadian reference for calculating SGA [18]. Although this is not optimal, a Brazilian reference does not exist. Gestational age was calculated using the date of last menstrual period reported by participants. Again this method is not ideal, but it is widely used in developing countries as the most reliable measure of gestational age when, as was the case in our study, ultrasound estimates are not available. On the other hand, our study had a high proportion responding, a large sample (compared to most other studies in this field), and used standardized and validated measures of the exposures and outcomes.
[Table 3. Association of lifetime violence and common mental disorders with low birth-weight (LBW) in adolescents who delivered term live babies in a public hospital in Sao Paulo, n = 795. Columns: low BW in mothers with the exposure (N/N total, %), low BW in mothers without the exposure (N/N total, %), unadjusted PR (95% CI), adjusted PR* (95% CI), adjusted PR** (95% CI), with rows grouped under Experience of Violence.]
The 22% prevalence of lifetime physical violence in our study is consistent with other estimates; 25% in India [25], 18% in China [26], and 13.1% [3] and 33.5% [2] in two Latin American studies of pregnant women. However, the prevalence of physical or sexual violence during pregnancy in our study (2%) is amongst the lowest reported, with other studies reporting prevalences ranging from 3.5% to 20% [3,[26][27][28]. One explanation may be that a significant proportion of our young mothers were still in the parental home, rather than on their own with a potentially violent partner. The 24% prevalence of CMD among pregnant adolescents is similar to that typically found among pregnant adults in developed countries [29] and in the one previous study from Brazil [30]. Violence and CMD were highly correlated with each other in this population. Studies from developed countries have clearly demonstrated that violence during pregnancy is associated with an increased risk of maternal mental disorders [31]. Women who experienced violence were more likely also to experience a range of other gendered disadvantages, and this may act to create an oppressive atmosphere which results in poor mental health [32].
Very few studies have examined the impact of violence on newborn outcomes in developing countries. Nasir et al.'s review [33] reported three studies, and we were able to identify three more [2-4]. Another study [28] found a greater risk of LBW among abused adult women compared to adolescents in a disadvantaged community in America. There is a negative report from China [26], but only violent threats were considered. A retrospective study from India, in which women recalled their last pregnancy, showed that victims of violence were significantly more likely to have experienced stillbirth or infant death [4], and Menezes et al [3] also reported an association with neonatal mortality. Recent studies of the association between antenatal common mental disorder and low birth weight are somewhat inconsistent in their findings. Reports of positive associations have tended to come from those living in conditions of absolute or relative socio-economic disadvantage. For example, Hoffman & Hatch [7] found a positive association, but only among women of low socioeconomic status in the USA. Two studies from South Asia [8,9], and now our current study of a disadvantaged population in Brazil, have also reported an independent association between CMD and LBW.
What are the possible mechanisms linking violence and mental disorder with adverse obstetric outcomes? The association between violence and low birth weight has been attributed to factors such as prematurity (caused by trauma), substance abuse (such as smoking), low socioeconomic status (leading to hunger), maternal medical problems and maternal mental illness [1,31]. The same mediators might plausibly apply to the association with mental disorder. However, the associations we report were evident after adjustment for all these factors. It is possible that the associations were mediated by poor nutrition and self-care in mothers; for example, abusers limiting access to food and antenatal care [31]. There may also be more direct biological pathways; there is accumulating evidence in humans that the hypothalamopituitary axis is in overdrive in pregnant women subjected to psychosocial stress [34]. Cortisol crosses the placenta and high levels inhibit intrauterine growth [35,36]. The fact that violence in pregnancy was associated with SGA but not with preterm babies, whereas CMD was associated with preterm birth but not SGA suggests different mechanisms for the two exposures on the pathway to low birth weight.
Conclusion
The effect of violence on young mothers who have limited personal and social resources can be devastating. Appropriate interventions are urgently required to avoid or minimize the effects of violence on the health of the mothers and the babies. The implication for clinical practice is that all adolescent mothers should be routinely screened for the experience of violence, both lifetime and during pregnancy, as well as for mental disorder. Identification of the one should alert clinical teams to the possibility of the other, given the strong correlations that we and others have reported. Exposed mothers should be treated as 'at risk', supported intensively during the pregnancy and monitored for fetal growth. Antenatal mental disorder seems particularly likely to be associated with adverse obstetric factors (drinking and smoking, poor physical health and premature birth). At the social policy level, concerted efforts are needed to combat gender-based violence not only as a human rights issue but as a major risk factor for poor maternal and newborn health; health professionals must actively engage in this advocacy.
Further research, preferably utilizing longitudinal designs, is needed to tease out the causal mechanisms linking violence with maternal and newborn outcomes. Such studies should examine not only the role of physical violence but also the influence of behaviors that cause harm without the use of physical force including neglect, humiliation and non-violent coerced sexual acts. The role of protective factors, such as social support, also merits investigation. | 2016-10-10T18:24:48.217Z | 2007-08-16T00:00:00.000 | {
"year": 2007,
"sha1": "3a904b844da294d7d09f43f6ffbb4c0ac19f4294",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-7-209",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f250cb9439a26e797938149c4268cf3464abad2",
"s2fieldsofstudy": [
"Medicine",
"Sociology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221755595 | pes2o/s2orc | v3-fos-license | Epithelioid angiomyolipoma with tumor thrombus in IVC and right atrium
ABSTRACT Epithelioid angiomyolipoma is an uncommon subtype of renal angiomyolipoma associated with potentially malignant behavior and is considered a distinct entity by the World Health Organization classification of renal tumors. We present a case of an epithelioid variant of angiomyolipoma with extension into the renal vein and inferior vena cava, reaching up to the right atrium. Pre-operatively, a diagnosis of renal cell carcinoma was considered based on imaging findings. Intra-operatively, due to extensive adhesions, surgical resection was not performed and only tissue sampling was done for histopathology. Microscopic examination revealed short fascicles of spindle cells and perivascular epithelioid cells. A differential diagnosis of renal cell carcinoma with sarcomatoid differentiation was considered. The immunohistochemical profile showed tumor cells that express Melan-A and smooth muscle actin, while they were negative for pan-cytokeratin, PAX8, CK7, CD117 and CD34. Therefore a diagnosis of epithelioid angiomyolipoma was rendered. The presence of intravascular thrombi on radiological investigation and a carcinoma-like growth pattern on light microscopy may lead to an erroneous diagnosis of renal cell carcinoma. Hence, it is prudent for the urologist to consider differential diagnoses other than renal cell carcinoma when confronted with a renal neoplasm presenting with intravascular thrombi. In these cases, a core biopsy should be planned pre-operatively and the diagnosis should be made with the aid of appropriate immunohistochemical markers.
INTRODUCTION
Angiomyolipoma (AML) is a rare hamartomatous tumor, which usually arises from the visceral organs, mainly in the kidney, lung and liver. AML is composed of an admixture of mature fat, smooth muscle and blood vessels. 1 Apart from the classical AMLs, the recent WHO classification describes several morphological variants. These include AML with epithelial cysts, oncocytoma-like AMLs, microscopic AMLs (microhamartoma) and intraglomerular lesions. Epithelioid AML (EAML) is described under a separate heading in the current WHO classification of tumors and is also known as PEComa of the kidney (perivascular epithelioid cell tumor). 2 Most cases of AMLs occur sporadically and only a few of them (<10%) are associated with tuberous sclerosis. 3 Although they are benign lesions, larger tumors, particularly the epithelioid variant, can behave aggressively and may have extra-renal extension. 4 Extension of an AML into the renal veins, inferior vena cava (IVC) and heart is rare, unlike renal cell carcinomas. 5,6 We report a rare case of a 40-year-old woman who presented with a large AML of the right kidney, with extension into the renal vein and IVC, up to the right atrium.
CASE REPORT
A 40-year-old female presented with complaints of abdominal pain, predominantly on the right side, which gradually worsened along with occasional episodes of vomiting over 2 months. She also complained of a vague lump in the abdomen. Physical examination revealed respiratory distress, hypotension and bilateral pedal edema. On abdominal examination, a 15x15x10 cm lump was bimanually palpable over the right hypochondrium, epigastrium and the right lumbar region, which had a hard consistency and did not move with respiration. Right renal angle fullness was present. The routine investigations showed anemia, thrombocytopenia, hyperkalemia, and hyponatremia. The chest X-ray confirmed the presence of right-sided pleural effusion. Computed tomography urography depicted a large heterogeneously enhancing mass with internal non-enhancing cystic to necrotic areas, measuring 10.5x11.9x16.0 cm, in the right kidney. The lesion was invading and expanding into the renal vein and the intrahepatic and suprahepatic IVC, reaching up to the right atrium (Figure 1). The possibility of a right renal cell carcinoma (RCC) with tumor thrombus in the renal vein and IVC with wall invasion was considered. In view of the radiological diagnosis of RCC, the patient was planned for an exploratory laparotomy. At exploratory laparotomy, dense adhesions were present between the tumor, the colon and the infrarenal as well as suprarenal IVC. No dissection plane was found between the IVC and the tumor.
Therefore, in view of tumor unresectability, a biopsy was taken and the resection was suspended. The microscopic examination showed a varied morphology with a short fascicular arrangement of spindle-shaped tumor cells, with intervening thin-walled vascular channels (Figure 2A), and nests and lobules of tumor cells with epithelioid morphology. These epithelioid tumor cells appeared to be more centered on dilated thin-walled vascular channels (Figure 2B). The tumor cells showed moderate nuclear pleomorphism with vesicular chromatin, prominent nucleoli and moderate to abundant amounts of clear to pale eosinophilic cytoplasm (Figure 2C).
Scattered mitotic figures and a few interspersed multinucleated tumor cells were also seen. A possibility of RCC with sarcomatoid differentiation was considered. However, on immunohistochemistry, the tumor cells showed immunoreactivity for Melan A (Figure 3A) and SMA (Figure 3B) and were negative for PAX-8, pan-cytokeratin, Myogenin, CD117, CD34, CK7 and HMB-45.
In view of the classical morphology and supporting immunohistochemistry findings, a diagnosis of EAML was considered. The preoperative radiological diagnostic consideration of RCC was excluded by the immunohistochemical results. The patient was discharged after appropriate supportive care and counselling. Once the histopathological diagnosis was made, the case was discussed at a multi-disciplinary team meeting, and possible treatment modalities were discussed. Subsequently, a telephone conversation with the patient's relative was held. However, the patient refused to come to the hospital for any treatment. Finally, the patient succumbed to her illness after two and a half months.
DISCUSSION
PEComas represent mesenchymal tumors characterized by unique perivascular epithelioid cells expressing both melanocytic and myoid markers. They can occur at any anatomical site, with a particular predilection for visceral locations such as kidney, liver and lung. PEComas of the kidney encompass classic AML and its histological variants (AMLs with epithelial cysts, oncocytoma-like AMLs, microscopic AMLs and intraglomerular lesions, and epithelioid AMLs). 1 AMLs represent 0.3-3% of all renal tumors, with a female preponderance due to hormonal influences. 7 They can range from microscopic lesions to very large tumors with extension into the IVC and heart. 6 Most of these tumors are asymptomatic and detected incidentally. Larger tumors (>4 cm) can be symptomatic, with flank pain and hematuria, or following retroperitoneal hemorrhage from intra-tumoral vessels. 8 The classic renal AML is a benign solid tumor which is typically composed of dysmorphic blood vessels, smooth muscle cells, and mature adipose tissue. These can show a predominance of smooth muscle elements or adipose tissue, depending on which they can be labelled leiomyoma-like or lipoma-like. 2 In a recent study by Çalışkan et al., 9 the authors described 28 cases of renal AML and classified them into three categories: fat-rich (82.1%), fat-poor (14.3%) and epithelioid (3.6%). In the study by Aydin et al., 3 classic AMLs accounted for 76.8% of cases, while epithelioid variants, AMLs with epithelial cysts and microscopic AMLs were noted in 7.7%, 6.7% and 10.8% of cases, respectively. EAMLs can be aggressive, with local extension, distant metastasis, and higher recurrence and mortality. 5,10 EAML was first described in 1997 by Eble et al. 11 and is composed of epithelioid and polygonal cells with varying degrees of nuclear atypia and little or no fat. According to recent articles, EAMLs can be categorized into typical and atypical types, the atypical one possessing aggressive behavior. 12 The malignant potential of EAMLs is unequivocally demonstrated in the literature. A few studies have analysed several clinico-pathologic factors for prognosticating EAML patients. Nese et al. 13 proposed a prognostic risk category for EAML cases by including five adverse parameters: EAML with TSC and/or coexisting classical AML, tumor size more than 7 cm, presence of a carcinoma-like growth pattern, perirenal fat extension and/or renal vein involvement, and necrosis. Tumors possessing 0-1 parameter, 2-3 parameters and 4 or more parameters were stratified into low-risk, intermediate-risk and high-risk categories, respectively. Among these groups, disease progression risks were 15%, 64% and 100%. Another study suggested the presence of at least three out of four parameters (atypical epithelioid cells ≥ 70%, mitotic figures ≥ 2/10 hpf, atypical mitotic figures and necrosis) to differentiate benign EAMLs with atypia from malignant EAMLs with atypia. 12 The present case had several features, including extrarenal extension, renal vein involvement, mitotic figures and a carcinoma-like growth pattern, suggesting malignant behavior.
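The Nese et al. stratification described above is a simple counting rule over five adverse parameters; the following sketch encodes it directly (a hypothetical helper written for illustration, not code from the cited study):

```python
def eaml_risk(tsc_or_coexisting_classic_aml, size_over_7cm,
              carcinoma_like_growth, fat_extension_or_renal_vein, necrosis):
    """Count the five adverse parameters of the Nese et al. scheme and map
    the total to the reported risk strata."""
    score = sum([tsc_or_coexisting_classic_aml, size_over_7cm,
                 carcinoma_like_growth, fat_extension_or_renal_vein, necrosis])
    if score <= 1:
        return "low risk (15% reported progression)"
    elif score <= 3:
        return "intermediate risk (64% reported progression)"
    return "high risk (100% reported progression)"

# E.g., a tumor >7 cm with carcinoma-like growth and renal vein involvement:
print(eaml_risk(False, True, True, True, False))  # -> intermediate risk
```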
Classic AMLs, particularly the larger ones, can rarely involve the renal vein or IVC. This might be attributed to multifocal genesis of the tumor, instead of direct vascular involvement. 14 The first case of renal AML with IVC extension was reported by Kutcher et al., 15 and the first case of renal AML with extension to the heart was reported by Rothenberg et al. 16 Riviere et al., 6 in their review, found that 44 patients with AML had IVC extension; most of them had large tumors (>4 cm) at presentation and more than 67% were symptomatic. The median age at presentation was 46.6 years; only seven patients had right atrial extension, and all of them were female.
The current diagnostic methods include ultrasound, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Because of its fat component, CT is the preferred diagnostic method for AML. 17 The fat content appears as hypodensity on CT and as a hyperechoic signal on sonography. 18 Approximately 5% of AMLs lack fat and, therefore, cannot be differentiated from RCCs. Due to their peculiar characteristics, EAMLs resemble conventional RCC both histologically and radiologically, and have similar cytologic features on fine-needle aspiration. 19 EAMLs and RCC both frequently present with vague flank pain, a palpable mass or hematuria. 20 The definitive method for the differential diagnosis between EAMLs and RCC is based on immunohistochemical markers. RCCs are immunoreactive for PAX-8, cytokeratin and EMA, which are negative in EAMLs. By contrast, EAMLs show co-expression of melanocytic markers (HMB-45 and Melan-A) and myoid markers (SMA, MSA, calponin and/or desmin), which are not found in RCC. 12 However, one of the melanocytic markers, HMB-45, was negative in the present case. Although this is an infrequent finding, it is well reported in the literature. Aydin et al. 3 reported HMB-45 and Melan A positivity in 92% and 80% of EAML cases, respectively; in their study, expression of at least one of these markers was noted in 100% of cases. Isolated cases showing loss of HMB-45 expression in AML have also been reported by Hohensee et al. 21 and Lin et al. 22 Hohensee et al. 21 reported absence of both melanocytic markers on IHC in a case of renal EAML; the diagnosis was confirmed by the presence of premelanosomes on electron microscopy examination. The authors ascribed the lack of IHC expression for these antibodies to aberrant antigen expression in the tumor tissue.
Currently, there is no standard treatment for renal AML. Annual imaging examinations are proposed for patients with sporadic tumors measuring <4 cm. However, for large tumors (>4 cm), the majority of previous studies recommend surgical treatment. For tumor thrombus involving the renal vessels, the inferior vena cava, and even the right atrium, a thrombectomy is reasonable. In contrast to conventional RCC, EAMLs are sensitive to chemotherapy because they are part of the perivascular epithelioid cell tumor group. EAMLs have been reported to respond to doxorubicin. 10 In conclusion, EAML is an uncommon variant of AML and a close mimicker of renal cell carcinoma, particularly when there is intravascular spread of tumor cells. Pathologically, a carcinoma-like growth pattern in the absence of adipocytic components may further add to an erroneous diagnosis. Though rare, it is prudent for the treating surgeon to consider differential diagnoses other than renal cell carcinoma when confronted with a renal neoplasm presenting with intravascular thrombi. In these cases, a core biopsy should be planned pre-operatively and the diagnosis should be made with the aid of appropriate immunohistochemical markers. | 2020-09-03T09:03:14.202Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "711fccc8d631c2d8318966b792da2c6ea538ab5a",
"oa_license": "CCBY",
"oa_url": "https://www.autopsyandcasereports.org/article/10.4322/acr.2020.190/pdf/autopsy-10-4-e2020190.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bde4bbcab032ac80451dcc6d087b43d5a0b99415",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236346784 | pes2o/s2orc | v3-fos-license | THE EFFECT OF RAMANIA LEAF (Bouea macrophylla Griff) EXTRACT GEL ON COLLAGEN FIBERS DENSITY IN INCISIONAL WOUND OF MALE WISTAR RATS
Background: Ramania leaf (Bouea macrophylla Griff) extract gel has secondary metabolites in the form of flavonoids, steroids, phenols and terpenoids, which have a role as antioxidants. They will protect the body from excessive production of reactive oxygen species (ROS) by increasing the endogenous antioxidants SOD, CAT and GPX, so that wound healing will not be inhibited and the process of collagen synthesis can run smoothly. Objective: To analyze the effect of ramania leaf extract gel applied topically at 5%, 10% and 15% concentrations on collagen fiber density in incisional wounds of male Wistar rats (Rattus norvegicus) on the 7th and the 14th day. Method: This research is a pure experimental study with a posttest-only control group design, using 24 rats which were divided into 4 groups: the treatment groups given ramania leaf extract gel of 5%, 10%, 15%, and the control group given placebo gel. The application of the extract gel was done once within 24 hours. The collagen level was measured with a spectrophotometer on the 7th and the 14th day. Results: Two-Way ANOVA test results on the 7th and the 14th day of each group showed a significant difference with p=0.000 (p<0.05). The Bonferroni post-hoc test showed a significant difference with p<0.05 between the placebo gel group and the groups of ramania leaf extract gel of 5%, 10%, 15% on the 7th and the 14th day. Conclusion: There is an effect of ramania leaf extract gel on collagen fiber density, with the most effective concentration being 15%.
INTRODUCTION
Humans are active beings whose many activities can cause injury, both intentionally and unintentionally. The prevalence of wounds is increasing every year. Research by The American Professional Wound Care Association shows that the incidence of wounds caused by surgery and trauma is up to 48% worldwide. 1 The incidence of injuries in Indonesia has increased from year to year. According to the 2018 Basic Health Research (Riskesdas) report, wounds due to oral surgery reached 8.46% and 0.013% were due to tooth extraction. The increased incidence of injuries will certainly become a problem, as acute wounds develop into chronic wounds when the healing process is prolonged. 2,3 Wound healing is a series of events that begins at the moment of injury and continues until wound closure; completing this process is important for the body to prevent infection and repair the damaged area. The process of wound healing consists of three phases, namely inflammation, proliferation, and remodeling. The component that plays an important role in the remodeling phase is collagen. Collagen is synthesized by fibroblasts, reaches its peak on the 7th day, and starts to stabilize and become organized around the 14th day. 4 Collagen is the main protein that makes up the extracellular matrix and is the protein most commonly found in the human body. 4 Wound healing aims to restore the function and shape of tissues to normal condition with minimal complications. Efforts to heal wounds can be made with chemical drugs, but those drugs have side effects and can cause resistance. 5 Another alternative is the use of herbal medicines. Herbal medicine is chosen as a cheaper solution that also has minimal side effects for the body. 6 There are various types of treatment that can be used to heal wounds, one of them being herbal medicine derived from plants and used as adjuvant therapy. One such plant found in South Kalimantan is ramania (Bouea macrophylla Griff), which belongs to the genus Bouea and the family Anacardiaceae. Banjarese people have food consumption patterns that tend to be high in fat, which can be one of the risk factors for atherosclerosis, a condition that can affect the process of wound healing. 7 Ramania leaf has secondary metabolites such as flavonoids, steroids, phenols and terpenoids. 6 One of the highest contents of ramania is flavonoids. Flavonoids function as antioxidants, antibiotics, antivirals, anti-allergics, anticancer, antimicrobial and anti-inflammatory agents. Flavonoids as antioxidants will protect the body from excessive production of ROS by increasing the endogenous antioxidants SOD, CAT and GPX, so that ROS can be suppressed, wound healing will not be inhibited and the process of collagen synthesis can run smoothly. 8,9 According to research conducted by Rahman et al (2017), ramania leaf contains 167.06 μg/mg of flavonoid compounds. 10 A further study by Fitri et al (2018) found that the IC50 value of ramania leaf extract was 35.808 μg/mg. 6 Based on another study by Risa (2018), mango leaf extract at a concentration of 15% can help the process of wound healing run faster, which led the researchers to use ramania leaf extract as an adjuvant therapy for wound healing in the form of gel preparations at concentrations of 5%, 10%, and 15%.
The selection of gel preparations aims to make the absorption process faster and to help release the active substances onto the skin. 11
MATERIALS AND METHODS
This research has passed the ethics feasibility test published by the Ethics Commission of the Faculty of Dentistry, Universitas Lambung Mangkurat, Banjarmasin through certificate No.071/KEPKG-FKGULM/EC/I/2020. This research is a pure experimental study with a posttest-only control group design. Twenty-four Wistar rats (Rattus norvegicus) were used as samples for this research. The inclusion criteria for the sample were male Wistar rats weighing 200-250 g, aged 2-3 months, moving actively and having a good appetite.
The research procedure started with the making of ramania leaf extract using the maceration method. Ramania leaves were cleaned by washing them under clean running water, then dried in an oven at 50 °C for 4 hours. After that, the leaves were ground into dried simplicia powder using a blender. The obtained simplicia powder was sifted with a sieve. The dried simplicia powder was weighed (100 g) and then macerated.
The maceration was done for 3 days without any exposure to sunlight. The ramania leaf simplicia was soaked in around 450 mL of 95% ethanol. Afterwards, the macerate was concentrated with a rotary evaporator at 50 °C and evaporated again in a water bath to remove the remaining solvent until a 100% thick extract was obtained. The ramania leaf extract was then mixed with a gel base to make the gel: 5 g of ramania leaf extract was mixed with 95 g of gel base, 10 g of extract with 90 g of gel base, and 15 g of extract with 85 g of gel base, resulting in ramania leaf extract gels with concentrations of 5%, 10% and 15%.
The preparation of the test animals took 7 days. The rats were given standard food and drink in a laboratory setting. The rats were anesthetized intraperitoneally with a mixture of ketamine 40-100 mg/kg BW and xylazine 5-10 mg/kg BW. Before that, the fur on the back of the rat was shaved over an area 5 cm long and 3 cm high, and the skin was disinfected with alcohol. The incision was made on the back of the rat, 2 cm long and extending in depth to the subcutaneous tissue, using a scalpel with a number 15 blade. The blood that came out was cleaned with a cotton swab moistened with NaCl solution. The rats were divided into 4 groups, namely the control group given placebo gel and the treatment groups given 5%, 10%, and 15% ramania leaf extract gel. Each group consisted of 3 rats. The extract gel was applied once a day in a one-way motion with the cotton bud applicator rotated, and the wound was then covered with gauze.
On the 7th and the 14th day, the rats in each group were euthanized with an intraperitoneal anesthetic mixture of ketamine 40-100 mg/kg BW and xylazine 5-10 mg/kg BW, waiting until the rat became unconscious.
The retrieval of tissue from the rats was carried out by biopsy, using an excisional biopsy technique with scalpels and fine surgical scissors. The excised area on the back of the rats measured 1 cm by 1 cm and extended in depth to the subcutaneous tissue. The biopsied tissue from each treatment was taken for biochemical analysis to estimate the amount of hydroxyproline by making homogenates. The sample tissue was dried in an oven at 60-70 °C for 12-18 hours. Then, the tissue was hydrolyzed with acid for 6 hours; the hydrolysate was centrifuged at 3000 rpm for 15 minutes and 1 mL of the collected supernatant was transferred into a test tube. Supernatants were lyophilized under a nitrogen gas flow. After that, the hydroxyproline content of the sample tissue was determined with the method of Stegemann and Stalder (1967). Hydrolysate was mixed with chloramine-T buffer at 4 °C; 20 minutes later, 1 mL of Ehrlich's reagent was added to obtain chromophore compounds, i.e. the color of the solution turned pink and no schlieren (transparent layer) formed in the solution, with the color change remaining stable for 30 minutes.
The absorbance of the solution was then measured at a wavelength of 550 nm, and the level of hydroxyproline in the sample was extrapolated from a hydroxyproline standard curve obtained with a UV-VIS spectrophotometer, using the equation y = ax + b, where y is the absorbance and x is the content. The results were then analyzed.
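The back-calculation from the standard curve is a direct inversion of y = ax + b; the sketch below illustrates it with invented calibration points, since the actual standards are not reported in the text:

```python
import numpy as np

# Hypothetical calibration points (hydroxyproline standards, ug/mL, vs
# absorbance at 550 nm); the study's actual standards are not reported here.
conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

a, b = np.polyfit(conc, absorbance, 1)  # least-squares fit of y = a*x + b

def hydroxyproline_content(y):
    """Invert the standard curve: x = (y - b) / a."""
    return (y - b) / a

print(round(hydroxyproline_content(0.30), 2))  # sample absorbance -> ug/mL
```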
RESULTS
The statistical results showed that the data of all groups were normally distributed and the variance of the data was homogeneous. Two-Way ANOVA test results on the 7th and the 14th day for each group showed a significant difference with p=0.000 (p<0.05). The Bonferroni post-hoc test showed a significant difference (p<0.05) between the placebo gel group and the 5%, 10% and 15% ramania leaf extract gel groups.
The results for the average density of collagen fibers in the incision wounds of the male Wistar rats can be seen in Table 1. The graph of the average collagen fiber density on the 7th and the 14th day can be seen in Figure 1. From Figure 1, it can be concluded that application of extract gel at higher concentrations is followed by an increase in collagen fiber density in male Wistar rats. A high density of collagen fibers indicates better antioxidant activity in the Wistar rats. The highest densities of collagen fibers, in descending order, were obtained with the 15%, 10% and 5% ramania leaf extract gels, and placebo.
DISCUSSION
The statistical test results showed that the average density of collagen fibers in the incisional wounds of the Wistar rats treated topically differed significantly between the 5%, 10% and 15% ramania leaf extract gels and the placebo gel. This significant difference is due to the presence of flavonoids in ramania leaf. According to the research by Rahman et al (2017), ramania leaf contains 167.06 μg/mg of flavonoid compounds. 10 Flavonoids function as antioxidants that help the process of wound healing. Flavonoids act as exogenous antioxidants by capturing free radicals and activating Nrf2 in the inflammatory phase, so as to maintain a balance between oxidants and antioxidants in the body. The increase in the antioxidant enzymes SOD, CAT, and GPX will prevent the formation of excessive ROS, which can disrupt communication between cells, so that the process of wound healing can run smoothly. 12 Flavonoids also have immunomodulatory capabilities. Based on the research by Suharto et al (2019), who used ginger extract (Zingiber officinale Roscoe), flavonoids can support lymphocyte proliferation and IL-2 production, which will stimulate the proliferation phase and the differentiation of T cells. The differentiated T cells will turn into Th1 cells and secrete IFN-γ, which has the potential to activate macrophages. Active macrophages will release several growth factors, namely PDGF, FGF, TGF-α, TGF-β and EGF, which are responsible for stimulating the proliferation and migration of fibroblasts, and also stimulate the production of extracellular matrix, which is important in the process of wound healing. 13 In addition to growth factors, flavonoids will also induce cytokines such as IL-1, IL-4 and IL-8, which play a role in the chemotaxis of fibroblasts and keratinocytes and in the activation of fibroblast proliferation and collagen synthesis. 3,14 The statistical test results for the effect of ramania leaf extract gel on collagen fiber density in incisional wounds of male Wistar rats on the 7th day showed an increase in collagen levels. This is in accordance with the research by Sucita et al (2019), which proved that sappan wood extract (Caesalpinia sappan L.) at a concentration of 6.5%, applied topically to rats with incisional wounds, can increase collagen fiber density, owing to the flavonoids that are the most abundant secondary metabolite compounds in sappan wood extract. 15 At this stage, the wound is still in the proliferation phase, which is marked by an increasing number of fibroblasts. The high number of fibroblasts is influenced by the high number of macrophages. This is in line with the research by Suharto et al (2019), which states that flavonoids also play a role in activating macrophages. When macrophages increase, TGF-β secretion will also increase. TGF-β triggers the proliferation and migration of fibroblasts. Fibroblast proliferation indicates that granulation tissue begins to form through a mechanism that produces a three-dimensional extracellular matrix in connective tissue. 16 Fibroblasts, with matrix metalloproteinases (MMPs) as the main processors of the extracellular matrix, will capture the fibrin matrix and convert it into glycosaminoglycan (GAG); the extracellular matrix is then replaced by another fibroblast product, namely type III collagen. 17
Type III collagen is the type of collagen commonly found during the initial process of wound repair and can reach its maximum amount on the 5th to the 7th day after wounding. 18 The statistical test results on the 14th day showed a decrease in collagen compared to the 7th day. This is in accordance with the research by Yuza F et al (2014) on the process of wound healing after tooth extraction in guinea pigs using 90% Aloe vera (Aloe barbadensis Miller) extract, which has the same flavonoid content as ramania leaf. The results of that research showed an increase in collagen fibers on the 7th day and a decrease on the 14th day. On the 14th day, the decrease in collagen fiber density in the control and treatment groups occurred because at this stage the process of wound healing was in the remodelling phase. 19 In this phase, there is simultaneous synthesis and degradation of collagen, so that the amount of collagen seen is not as great as in the previous phase. The final amount of collagen depends not only on collagen synthesis, but also on its degradation. The balance between collagen synthesis and tissue degradation forms a normal process of wound healing. This balance persists until about 3 weeks after the injury, before stability finally occurs. 17,20 In this phase, type III collagen is replaced by type I collagen, which forms bands and has stronger tensile strength and density in the new tissue. 21 When the collagen fibers begin to form, the tensile strength of the wound will also slowly return. At the end of this remodelling phase, skin injuries are only able to withstand stretches of approximately 80% of the capacity of normal skin. 22 The results showed that 15% ramania leaf extract gel gave a better effect than the 5% and 10% ramania leaf extract gels and the placebo gel. These results are supported by the research conducted by Dewantari and Sugihartini (2015), which stated that higher extract concentrations in a gel preparation further increase wound healing activity. 23 This showed that higher concentrations produce higher antioxidant activity, reflected in higher collagen fiber density. 24 This antioxidant activity is caused by the components contained in ramania leaf, namely flavonoids, phenols, steroids, and terpenoids, supporting one another in the extract gel and making antioxidant production more effective. 25 Based on this research, it can be concluded that there is an effect of ramania leaf extract gel (Bouea macrophylla Griff) on collagen fiber density, with the most effective concentration being 15%. | 2021-07-27T00:05:28.708Z | 2021-05-27T00:00:00.000 | {
"year": 2021,
"sha1": "234abafc681c155d7d7ede33ff712ed4fdb7afad",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.20527/dentino.v6i1.10648",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e4402ca9ff4dda1de4835ce48dbac8335d156667",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
8752329 | pes2o/s2orc | v3-fos-license | Profiles of the auditory epithelia related microRNA expression in neonatal and adult rats
Background The impact of miRNA differential expression on auditory epithelium stem cell development in postnatal rats is not clear. The present study was designed to analyze miRNA expression in the organ of Corti of neonatal and adult rats. Methods The cochleae of newborn (P0) and adult (P30) Sprague-Dawley rats were dissected in cold PBS to collect the sensory epithelia. Small RNAs were extracted using the mirVana RNA Isolation kit. Then, miRNA expression profiling was performed with RNAs from three newborn and three adult rats utilizing the TaqMan Array Rodent MicroRNA Panel. Results Eighteen miRNAs were found to be differentially expressed; 16 were upregulated in mature cochleae, with fold changes ranging from 17- to 600-fold. The expression levels of two miRNAs were reduced in the mature rat cochleae. GO analysis and signaling pathway analysis revealed the potential involvement of the miRNAs in the regulation of Wnt and TGF-β signaling pathways in hair cell development. Conclusions Our results provided novel insights into the functional significance of miRNAs in basilar membrane cell development, and revealed the potential importance of miRNAs in the hair cell by regulation of Wnt and TGF-β signaling.
Background
The inner ear, located in the temporal bone, contains the cochlea responsible for hearing and the vestibule responsible for balance. In the cochlea, the organ of Corti is a specialized structure that responds to fluid-borne vibrations. The organ of Corti is a complex organ that comprises a highly-ordered cellular mosaic of sensory hair cells (HCs) and non-sensory supporting cells (SCs). The generation of new HCs occurs throughout life in the auditory and vestibular sensory receptors of fishes and amphibians [1,2]. In mammals, embryonic HC and SC proliferation within the sensory epithelium culminates between embryonic day 13 (E13) and E15, but stops after birth [3]. It has been shown that acutely dissociated cells from the newborn rat or young rat organ of Corti can develop into otospheres consisting of 98% nestin cells when plated on a non-adherent substratum in the presence of either epidermal growth factors or fibroblast growth factors [4,5]. Li and colleagues have shown that the adult utricular sensory epithelium contains cells that display the biological features of stem cells including self-renewal, sphere formation and capability of differentiating into hair cell-like cells [6]. However, the replacement of lost hair cells does not occur spontaneously in pathological conditions such as age-related cochlear degeneration. A number of genes that affect various aspects of inner ear development have been identified and include transcription factors, morphogens, growth factors, receptors, and so on. microRNAs (miRNAs) were discovered by Lee and colleagues in Caenorhabditis elegans in 1993 as novel molecules that play an important role in gene expression regulation [7]. miRNAs are small noncoding RNA molecules (approximately 22 nucleotides) that regulate posttranscriptional gene expression by relatively nonspecific binding to the 3'-untranslated region of mRNA [8]. A single miRNA may regulate several genes because of sequence similarity. It has been proposed that over one third of all protein-encoding genes are under translational control by miRNAs [9]. miRNAs are involved in a variety of cellular processes, including cellular differentiation, proliferation and apoptosis [10]. miRNAs play an essential role in inner ear development [11]. A recent study using conditionally knocked out Dicer only in the inner ear, SE hair and SCs after their normal differentiation from progenitor cells revealed the importance of miRNAs in inner ear development and function in vertebrates [12]. Using an in silico prediction model that integrates miRNAs, mRNA and protein expression, Elkan-Miller and co-workers discovered the expression of 157 miRNAs in the inner ear sensory epithelium, with 53 miRNAs differentially expressed between the cochlea and the vestibule. Six miRNA families appear to be functionally important in the inner ear [13].
Zhang and colleagues [14] identified the miRNAs involved in degeneration of the organ of Corti during age-related hearing loss. They showed that 111 and 71 miRNAs exhibited differential expression in the C57 and CBA mice aged from postnatal day 21 to 16 months, respectively, and that downregulated miRNAs substantially outnumbered upregulated miRNAs during aging. However, comparisons of miRNA differential expression in the organ of Corti between newborn and adult rats, representing the early development of the inner ear sensory epithelium, have not yet been investigated. Therefore, in this study, we characterized the miRNA expression profiles of the auditory epithelia of both newborn and adult rats in order to examine the patterns and potential roles of miRNA differential expression in the early development of the inner ear sensory epithelium. The results showed that 18 differentially expressed miRNAs were identified. GO (Gene Ontology) term analysis revealed the importance of Wnt and transforming growth factor (TGF)-β signaling in hair cell development. Understanding the miRNA and gene interaction network sheds light on their roles in the development of normal and impaired hearing, and the mechanisms leading towards deafness.
Animal
All procedures on neonatal (P0) and adult (P30) Sprague-Dawley (SD) rats were approved by the Institutional Animal Care and Use Committees of the Chinese PLA General Hospital.
RNA isolation
The cochleae of newborn (P0) and adult (P30) SD rats were dissected in cold PBS (10 mM Na2HPO4, 1.7 mM KH2PO4, 137 mM NaCl, 2.7 mM KCl, pH 7.4) to collect the sensory epithelia. The collected tissues were stored in RNAlater (Ambion, Austin, TX, USA) until use. Small RNAs (<200 nucleotides) were extracted using the mirVana RNA Isolation kit (Ambion, Austin, TX, USA) according to the manufacturer's instructions. The quality and quantity of the RNA preparations were determined using a 2100 Agilent BioAnalyzer and a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA).
Microarray analyses
miRNA expression profiling was performed with RNAs from three newborns and three adult rats utilizing the TaqMan Array Rodent MicroRNA Panel (Applied Biosystems, Foster City, CA, USA) using 50 ng of RNA per port for a total of 400 ng. This array contains 365 miRNA targets as well as endogenous controls. Normalization was performed with the small nuclear RNAs (snRNAs) U44 and U48. These snRNAs are stably-expressed reference genes suitable for normalization of miRNAs. The qRT-PCR for the assessment of gene expression levels was performed using an ABI Prism 7900HT Sequence detection system (Applied Biosystems, Foster City, CA, USA).
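Normalization to stably expressed reference snRNAs such as U44 and U48 is conventionally done with the comparative Ct (2^-ΔΔCt) method; the sketch below shows the arithmetic with invented Ct values (the study's raw Ct data are not given here, and the array software's exact pipeline may differ):

```python
from statistics import mean

def fold_change(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Comparative Ct (2^-ddCt): normalize the target miRNA Ct to the mean Ct
    of the reference snRNAs (e.g., U44 and U48), then compare a sample
    against a calibrator (here, adult vs newborn)."""
    d_ct = ct_target - mean(ct_refs)               # sample delta-Ct
    d_ct_cal = ct_target_cal - mean(ct_refs_cal)   # calibrator delta-Ct
    return 2 ** -(d_ct - d_ct_cal)

# Invented Ct values, for illustration only:
print(fold_change(24.0, [18.1, 18.3], 29.5, [18.0, 18.2]))  # ~48-fold up
```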
Two class differential
We applied the random variance model (RVM) t-test to filter the differentially expressed miRNAs between the newborn and adult groups, because the RVM t-test can effectively raise the degrees of freedom in the case of small samples. After the significance analysis and FDR analysis, we selected the differentially expressed genes according to the P-value threshold [15].
GO analysis
GO analysis was applied to analyze the main function of the differentially expressed genes according to Gene Ontology, the key functional classification of the National Center for Biotechnology Information (NCBI) [16]. Generally, Fisher's exact test and the χ2 test were used to classify the GO categories, and the false discovery rate (FDR) [17] was calculated to correct the P-value; the smaller the FDR, the smaller the error in judging the P-value. The FDR was defined as FDR = 1 - N_k/T, where N_k refers to the number of Fisher's test P-values less than the χ2 test P-values and T is the total number of Fisher's test P-values. We computed the P-values for the GOs of all the differentially expressed genes. This enrichment analysis provides a measure of the significance of the function: as the enrichment increases, the corresponding function is more specific, which enabled us to identify those GOs with a more concrete function description in the experiment. Within the significant category, the enrichment Re was given by Re = (n_f / n) / (N_f / N), where n_f is the number of differential genes within the particular category, n is the total number of genes within the same category, N_f is the number of differential genes in the entire microarray, and N is the total number of genes in the microarray [18].
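To make the per-category computation concrete, here is a minimal sketch of the Fisher's exact test and the Re statistic defined above (the counts are illustrative; the 2x2 table is the standard category-versus-rest, differential-versus-not construction):

```python
from scipy.stats import fisher_exact

def go_enrichment(n_f, n, N_f, N):
    """One-sided Fisher's exact test for a single GO category, plus the
    enrichment Re = (n_f/n) / (N_f/N) defined in the text.
    Rows of the 2x2 table: in-category vs rest; columns: differential vs not."""
    table = [[n_f, n - n_f],
             [N_f - n_f, (N - n) - (N_f - n_f)]]
    _, p = fisher_exact(table, alternative="greater")
    return p, (n_f / n) / (N_f / N)

# Illustrative counts only:
p, re = go_enrichment(n_f=12, n=40, N_f=300, N=20000)
print(f"P = {p:.2e}, Re = {re:.1f}")
```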
Pathway analysis
Pathway analysis was used to identify the significant pathways of the differentially expressed genes according to the Kyoto Encyclopedia of Genes and Genomes (KEGG), Biocarta and Reactome. Again, we used Fisher's exact test and the χ2 test to select significant pathways. The threshold of significance was defined by the P-value and FDR. The enrichment Re was calculated using the equation described above [19-21]. The relationships between miRNAs and genes were weighted by their differential expression values and, according to the interactions of miRNAs and genes in the Sanger microRNA database, used to build the miRNA-gene network. The adjacency matrix of miRNAs and genes, A = [a_ij], is built from the attribute relationships among genes and miRNAs, where a_ij represents the weight of the relation between gene i and miRNA j. In the miRNA-gene network, circles represent genes and squares represent miRNAs, and their relationship is represented by an edge. The center of the network is characterized by degree. Degree means the contribution made by one miRNA to the genes around it, or the contribution made by one gene to the miRNAs around it. The key miRNAs and genes in the network always have the largest degrees [22,23].
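The degree-centered reading of the network can be illustrated with a few of the miRNA-gene pairs named in the Results (the edge list below is a hand-picked toy subset, not the full Sanger-predicted network behind Figure 3):

```python
import networkx as nx

# Toy miRNA-gene edges drawn from the genes/miRNAs named in the text:
edges = [("miR-301a", "Tgfbr1"), ("miR-301a", "Smad4"),
         ("miR-301b", "Tgfbr2"), ("miR-301b", "Smad4"),
         ("miR-130b", "Tgfbr1"), ("miR-130b", "Smad2"), ("miR-130b", "Smad5")]

G = nx.Graph(edges)
# "Degree" in the sense used above: how many partners each node connects to;
# the hub nodes with the largest degree sit at the center of the network.
for node, degree in sorted(G.degree, key=lambda nd: -nd[1]):
    print(node, degree)
```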
MicroRNA-GO-network
The miRNA-GO-network is built according to the relationships between significant GOs and genes and the relationships between miRNAs and genes. The adjacency matrix of miRNAs and GOs, A = [a_ij], is built from the attribute relationships among GOs and miRNAs, where a_ij represents the weight of the relation between GO i and miRNA j. In the miRNA-GO-network, circles represent GOs and squares represent miRNAs, and their relationship is represented by an edge. The center of the network is characterized by degree. Degree means the contribution made by one miRNA to the GOs around it, or the contribution made by one GO to the miRNAs around it.
The key miRNAs and GOs in the network always have the largest degrees.
miRNA expression profile analysis
To gain insight into the roles of miRNAs that may be associated with the proliferative ability of cochlear cells during maturation of the cochlea, we examined the global expression pattern of mature miRNAs using TaqMan® Rodent MicroRNA Arrays V2.0 (Applied Biosystems, Foster City, CA, USA). A total of 18 miRNAs exhibited expression changes in adult cochleae (Table 1) when compared with the newborn cochleae. Among these miRNAs, 16 were upregulated in mature cochleae, with fold changes ranging from 17- to 600-fold. The expression levels of two miRNAs (rno-miR-29c and rno-miR-29a) were reduced in the mature rat cochleae. This observation suggests that miRNAs are involved in the developmental process of the cochlea.
Microarray-based GO analysis
According to the threshold for GOs significantly regulated by miRNAs, the P-value and FDR were < 0.001 and < 0.05, respectively. The high-enrichment GOs targeted by overexpressed miRNAs included negative regulation of epithelial cell differentiation, common-partner SMAD protein phosphorylation, mesenchymal-epithelial cell signaling, and regulation of TGF-β2 (Figure 1). In contrast, significant GOs corresponding to downregulated miRNAs included protein heterotrimerization, negative regulation of the phosphatidylinositol biosynthetic process and regulation of mitosis. Among these cellular processes, the maximum-enriched GOs relating to TGF-β2 and SMAD signaling suggested that they might have an important role in the proliferative potential of the SE. Additionally, the miRNA-mRNA network analysis integrated these miRNAs and GOs by outlining the interactions of miRNAs and GO-related genes (Figure 2). The miRNA-mRNA regulatory networks were established (Figure 3) and distinguished the putative target mRNAs of overexpressed and under-expressed miRNAs. Seven overexpressed miRNAs (miR-20a, miR-199a-5p, miR-199, miR-323, miR-301a, miR-301b and miR-130b) showed the most target mRNAs. The miRNAs miR-301a, miR-301b and miR-130b regulated some important genes, including TGF-βR1, TGF-βR2, Smad2, Smad4 and Smad5 and, therefore, might be of great importance to the activation of the organ of Corti.
Signaling pathways regulated by differentially expressed miRNAs
Functional analysis of miRNAs by KEGG revealed that 19 signal transduction pathways were upregulated (Figure 4A) and 14 were downregulated (Figure 4B). The upregulated signaling pathways, including Wnt, TGF-β and mitogen-activated protein kinase (MAPK), have been shown to participate in the activation of stem cells. A wide variety of cellular processes, including regulation of the actin cytoskeleton and the MAPK and gonadotropin-releasing hormone (GnRH) signaling pathways, also featured among the significant signaling pathways.
Discussion
miRNAs have become an area of intense study because of their involvement in human diseases. The lack of inner ear hair cell proliferation contributes to hearing loss in the aging population. In the present study, we compared the miRNA expression profiles between the newborn and adult cochlear sensory epithelia. Our results revealed that several miRNAs were differentially expressed between these two cochlear age groups. The difference in miRNA expression may contribute to the loss of proliferation of sensory cells in the organ of Corti in adult cochleae. These results provide novel insights into the functional significance of miRNAs in basilar membrane cell development. Given that embryonic HC and SC proliferation within the sensory epithelia culminates between embryonic day 13 (E13) and E15, but stops after birth in mammalian ears, the fundamental mechanisms behind the embryonic HC and SC proliferation potential are probably lost after birth. One possibility is that coordinated and tightly controlled gene expression programs orchestrate the developmental process. miRNAs, as key regulators, might play important roles during this phenotypic transition, adding another layer of complexity to the regulatory network for basilar membrane proliferation. Our miRNA microarray data suggested that the expression profile of miRNAs in the rat inner ear appears to be well established by P0, consistent with the fact that early inner ear development and cell fate specification mostly occur embryonically [24]. Our profiling data identified two distinct expression patterns of miRNAs between newborn and adult rat basilar membrane. Such differences appeared to be associated with basilar membrane proliferation. The miRNA expression profiling identified 18 differentially expressed miRNAs, with 16 miRNAs increased and two miRNAs decreased in the mature rat compared to the newborn rat. miRNAs that increased most in the adult basilar membrane include miR-296, miR-130b and miR-183, and those that decreased most include miR-29c and miR-29a. MiR-296 has been demonstrated to modulate the pluripotency of embryonic stem cells (ESCs) by repressing the expression of Oct4, Sox2, and Nanog [25]. In vertebrates, the expression domain of conserved miRNA-183 (miR-183) family members appears to be restricted to ciliated neurosensory epithelial cells and certain cranial and spinal ganglia [26,27]. In zebrafish the miRNAs are detected in the eye, nasal epithelium, and sensory hair cells of the ear and neuromasts [26], and injection of miR-183 and miR-200 family members into zebrafish embryos has been demonstrated to impact development and affect neuromast migration [28]. Additionally, expression of miR-183 family members in mouse eye and aural sensory hair cells of the ear has been previously demonstrated [29].
Compared with the differential expression pattern of miRNAs between newborn (P0) and adult rats (P30), Zhang and colleagues [14] reported different or even opposite patterns when they asked which miRNAs are involved in age-related (from P21 to 16 months) degeneration of the organ of Corti, the auditory sensory epithelium that transduces mechanical stimuli into electrical activity in the inner ear. They showed that 111 and 71 miRNAs exhibited differential expression in the C57 and CBA mice, respectively, and that downregulated miRNAs substantially outnumbered upregulated miRNAs during aging. miRNAs that had approximately 2-fold upregulation included members of the miR-29 and miR-34 families, and those downregulated by about 2-fold were members of the miR-181 and miR-183 families. The inconsistency between Zhang's report and our study suggests that miRNA patterns in the organ of Corti change with aging and that miRNAs such as miR-183 and miR-29 play different roles in the development of the organ of Corti in newborn, younger and older animals.
The present GO analysis and signaling pathway analysis showed that the high-enrichment GOs targeted by miRNAs overexpressed in young adult (P30) compared with newborn rats (P0) included negative regulation of epithelial cell differentiation, common-partner SMAD protein phosphorylation, mesenchymal-epithelial cell signaling, and regulation of TGF-β2 production. In contrast, significant GOs corresponding to downregulated miRNAs included protein heterotrimerization, negative regulation of the phosphatidylinositol biosynthetic process and regulation of mitosis. Among these cellular processes, the maximum-enriched GOs relating to TGF-β2 and SMAD signaling suggested that they have an important role in the proliferative potential of the SE. However, Zhang and colleagues [14] reported that miRNAs upregulated in aging mice (from P21 to 16 months) are known regulators of pro-apoptotic pathways, whereas downregulated miRNAs are known to be important for proliferation and differentiation. The authors concluded that the shift of miRNA expression favoring apoptosis occurred earlier than detectable hearing threshold elevation and hair cell loss. The authors suggested that changes in miRNA expression precede morphological and functional changes, and that upregulation of pro-apoptotic miRNAs and downregulation of miRNAs promoting proliferation and differentiation are both involved in age-related degeneration of the organ of Corti. The inconsistency in functions between Zhang's report and our study can be explained by the different roles of miRNAs in the development of the organ of Corti in newborn, younger and older animals.
Establishment of primitive streak cells upon differentiation of ESCs depends on the presence of active Wnt and TGF-β/nodal/activin signaling, which recapitulates the early events that lead to germ-layer induction in the mammalian embryo [30]. In our study, we found that miRNAs that inhibit the Wnt and TGF-β signaling pathways were decreased in adult rats. Recently, Oshima and colleagues generated mechanosensitive sensory hair cell-like cells from embryonic and induced pluripotent stem cells using a combination of the Wnt inhibitor Dkk1, the selective inhibitor of Smad3 (SIS3), which interferes with TGF-β signaling, and insulin-like growth factor 1 (IGF-1) [31]. Consistent with these previous studies, our findings revealed the potential importance of miRNAs in hair cells through their regulation of Wnt and TGF-β signaling.
Conclusions
Our results provide novel insights into the functional significance of miRNAs in the development of basilar membrane cells and reveal the potential importance of miRNAs in hair cells through their regulation of Wnt and TGF-β signaling. | 2016-05-17T12:50:21.794Z | 2014-09-06T00:00:00.000 | {
"year": 2014,
"sha1": "70131363f438e66ffa80360ba2e916f8a4fccd26",
"oa_license": "CCBY",
"oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-014-0048-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91b8e3340d849ba2f7e5f9ac7c751d852290ac57",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52238839 | pes2o/s2orc | v3-fos-license | Isopropyl Octanoate Synthesis Catalyzed by Layered Zinc n-Octanoate
Isopropyl octanoate is an ester used in the formulation of cosmetics, foods, medicines, and other products. The peculiar thermodynamic and kinetic factors of its synthesis, together with the demand from the applications mentioned above, justify the development of an optimized process. In the present article, the esterification of octanoic acid with isopropanol using zinc n-octanoate as catalyst is reported. A factorial design was employed in which variables such as molar ratio (alcohol:acid), percentage of catalyst (relative to the acid mass), and temperature were investigated. Zinc octanoate showed promising catalytic activity, especially at 165 °C, with molar ratios of 8:1 and 6:1 and with 7 and 10% of catalyst, where conversions up to 75% were observed. Optimized conditions for the production of isopropyl octanoate were determined to be 165 °C, a 6:1 molar ratio, and 5.97% of catalyst.
Introduction
Esters are widely used industrial chemicals.1-4 Among such esters, isopropyl octanoate is obtained by esterification of isopropanol with octanoic acid in the presence of an acid catalyst (typically sulfuric or hydrochloric acid) in homogeneous medium, which requires a complex process to remove the products and neutralize the acid catalyst.5 Other solid acid catalysts have also been reported, but due to their high acidity, parallel reactions are frequently observed.6 Esterification reactions with branched-chain alcohols generally have low yields, owing to the low reactivity of the radical derived from the alcohol.7 Despite these difficulties, isopropyl octanoate is required for the formulation of cosmetics, toiletries, foods, medicines, and other products.8,9 Therefore, it is highly relevant to find more efficient methods for the esterification of octanoic acid, an affordable saturated linear-chain carboxylic acid essential for obtaining isopropyl octanoate.
A class of materials used in heterogeneous catalysis that has received great attention in scientific and industrial fields is layered compounds, whose structures are based on layers stacked along the basal axis, separated or not by cations and anions.10 In this context, layered metal carboxylates present strong potential for application as catalysts to synthesize various chemical compounds. Laurates of transition metals and benzoates of alkaline earth metals have been employed successfully for the production of methyl laurate and methyl benzoate, respectively.11,12 Research into new catalytic systems is facilitated by the use of alternative experimental designs, such as the Box-Behnken design (BBD), a three-level factorial design consisting of three interlocked 2² designs and a central point, which reduces the number of experiments required during the investigation of complex matrices.13 BBD has mainly been applied to optimize the extraction of several analytes, synthesis/derivatization reactions, chromatographic separation, and electrochemical processes, but it has not yet been applied to the esterification of octanoic acid with isopropanol.
Therefore, the objective of this paper is to describe the development of an optimized process, through a Box-Behnken design, for the synthesis of isopropyl octanoate from affordable chemicals, exploring the catalytic properties of layered zinc octanoate.14 Zinc octanoate was chosen for this work (i) to ensure that no ester with a carbon number different from eight would be formed through eventual exchange of structural anions (between acid and catalyst) during esterification, and (ii) because zinc-based catalysts have shown good catalytic activity during esterification of fatty acids and can be easily recovered and reused.10
Synthesis of zinc octanoate
First, sodium octanoate was produced by reacting 120.3 mmol of octanoic acid (solubilized in 40 mL of methanol) with a stoichiometric amount of a previously prepared methanolic NaOH solution, at 50 °C under vigorous stirring. The sodium octanoate precipitate was solubilized with the addition of 50 mL of distilled water at room temperature under magnetic stirring.
For the synthesis of zinc octanoate, an anhydrous zinc chloride (Vetec, 98.0%) solution, prepared with 60.15 mmol in 100 mL of distilled water, was slowly added, under magnetic stirring, to the solution containing sodium octanoate. At the end of the addition, the mixture was kept under stirring for 30 minutes. The white solid obtained was washed and centrifuged twice with distilled water and then dried in a vacuum oven at 60 °C until constant weight. The total yield of this synthesis was 97.44%.
Characterization techniques
X-ray powder diffraction (XRD) was used to characterize the structure of zinc octanoate. The experiments were conducted on a Shimadzu XRD-6000 diffractometer, using Cu Kα radiation (λ = 1.5418 Å), a current of 30 mA, a voltage of 40 kV, and a scan rate of 2° min⁻¹. The samples were placed in glass sample holders and lightly hand pressed so that the crystals were perfectly set in the holder's plane.
To verify that the structure formed was consistent with that of metal carboxylates, the vibrational modes present in the compound were analyzed by Fourier transform infrared spectroscopy (FTIR). Measurements were made in KBr (spectroscopic grade, Vetec) discs and collected on a Bio-Rad FTS 3500GX spectrophotometer, in the 400-4000 cm⁻¹ range, with a resolution of 4 cm⁻¹ and accumulation of 32 scans.
Thermal analysis measurements (simultaneous thermogravimetry, TGA, and differential scanning calorimetry, DSC) were performed on a Netzsch STA 449 F1 Jupiter analyzer under a synthetic air flow of 50 mL min⁻¹, using alumina crucibles, a heating rate of 10 °C min⁻¹, and temperatures ranging from 30 to 1000 °C. High-resolution DSC analyses were obtained on a Netzsch 200F3 calorimeter using aluminum crucibles, an N2 atmosphere with a flow of 20 mL min⁻¹, heating/cooling rates of 10 °C min⁻¹, and a temperature range from 20 to 155 °C.
Nuclear magnetic resonance (NMR) spectra in the solid state were acquired on a Bruker AVANCE 400 spectrometer operating at 9.4 T, observing 13C nuclei at 100.6 MHz, equipped with a 4 mm multinuclear probe for solids, spinning at the magic angle at 5000 Hz. 13C NMR spectra were acquired through the application of 90° excitation pulses, followed by cross-polarization for 2.0 ms, high-power decoupling during acquisition (0.04 s), and a relaxation delay of 5 s, with 2048 points and accumulation of 1024 scans for each sample. Spectra were processed by applying an exponential multiplication of the free induction decay (FID) by a factor of 50 Hz, followed by Fourier transformation with 4096 points (real spectrum size, RSS).
Catalytic activity
The catalytic activity of zinc octanoate was investigated for the esterification of octanoic acid (98% purity) with isopropanol (98% purity). Tests were performed in a Büchi Glas Uster miniclave drive pressurized steel reactor with external stirring at 500 rpm. Temperature control was achieved with a circulating heating system (Julabo HE-4) coupled to the reactor. The reaction conditions for the reactor tests were established from a Box-Behnken design with three levels and three variables, in which the influence of temperature, molar ratio (isopropanol/octanoic acid), and percentage of catalyst was evaluated. To check the method's repeatability, the central point was evaluated in triplicate.
The upper and lower levels for each variable were: temperature, 165 and 145 °C (central point, 155 °C); molar ratio (MR), 10:1 and 6:1 (central point, 8:1); and percentage of catalyst relative to the mass of octanoic acid, 10 and 4% (central point, 7%). Therefore, 15 experiments were carried out, three of them at the center point.
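As a rough illustration (ours, not part of the original work), the 15-run coded Box-Behnken matrix and its mapping to these real levels can be generated as follows; all function and variable names are hypothetical:

import itertools

# Coded Box-Behnken design for three factors: every pair of factors at +/-1
# with the remaining factor at 0, plus the centre point run in triplicate.
def box_behnken_3():
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0, 0, 0]] * 3          # centre point in triplicate
    return runs                      # 12 + 3 = 15 runs

# Map coded levels (-1, 0, +1) to the real units used in the paper.
center = {"T_degC": 155.0, "MR": 8.0, "cat_pct": 7.0}
half = {"T_degC": 10.0, "MR": 2.0, "cat_pct": 3.0}
for run in box_behnken_3():
    print({k: center[k] + half[k] * c for k, c in zip(center, run)})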
The reactions were conducted as follows: the octanoic acid, isopropanol, and catalyst were introduced into the reactor, which was tightly sealed and heated, reaching the reaction temperature after approximately 35, 45, and 55 minutes for temperatures of 145, 155, and 165 °C, respectively. At the end of the scheduled reaction period (2 hours), the system's temperature was decreased with the aid of a ventilator for about 20 minutes until mild conditions were reached, and the reaction mixture was then transferred to a 100 mL volumetric flask. The excess alcohol was removed by rotary evaporation under reduced pressure at 80 °C. The system pressure was governed by the vapor pressure of isopropanol, the most volatile component in the reaction medium; for reaction temperatures of 145, 155, and 165 °C, the pressures were 6, 8, and 10 bar, respectively.
The conversion of octanoic acid into isopropyl octanoate was measured by the remaining-acid quantification method (Ca-40) of the American Oil Chemists' Society (AOCS),15 which involves titration with a 0.1 mol L⁻¹ NaOH solution standardized with potassium biphthalate. The results were expressed as percentage conversion to ester, based on a well-established commercial sample, in this case the tested octanoic acid.
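A minimal sketch of the underlying arithmetic, assuming the conversion is computed from the moles of acid remaining (this simplifies the AOCS procedure; the function name and example numbers are ours):

def conversion_pct(n_acid_initial_mol, v_naoh_L, c_naoh=0.1):
    """Percent conversion of octanoic acid to ester, from titration of the
    remaining acid with standardized NaOH (illustrative only, not the
    verbatim AOCS Ca-40 procedure)."""
    n_acid_left = c_naoh * v_naoh_L          # mol of unreacted acid
    return 100.0 * (1.0 - n_acid_left / n_acid_initial_mol)

# e.g. 0.10 mol acid charged and 350 mL of 0.1 mol/L NaOH consumed
# corresponds to 65% conversion:
print(conversion_pct(0.10, 0.350))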
Esterification can also proceed under the influence of temperature alone,16 so thermal conversion tests were carried out following the BBD above but omitting the catalyst-percentage variable, resulting in a total of nine experiments. The catalytic activity of zinc octanoate was assessed by comparing, for each condition, the conversion to isopropyl octanoate obtained in the catalyzed reactions with the corresponding thermal conversions obtained without added catalyst.
Three experiments on the synthesis of isopropyl octanoate catalyzed by zinc octanoate as a function of time were carried out in a 300 mL PARR stainless steel reactor equipped with a special valve that allowed 1.5 mL homogeneous aliquots to be taken from each reaction at 0, 30, 60, 90, 120, 180, and 240 minutes without stopping the reaction.
Statistical analysis
The experimental results were analyzed through a response surface method generated by the Design-Expert 7.1 software (Stat-Ease Inc., USA). Model fit quality was evaluated by analysis of variance (ANOVA) and determination coefficients. The basic model equation used to fit the data was:

Y = β0 + β1X1 + β2X2 + β3X3 + β12X1X2 + β13X1X3 + β23X2X3 + β11X1² + β22X2² + β33X3² + ε

where Y = desired response; X1, X2, and X3 = independent variables representing temperature, molar ratio, and catalyst percentage, respectively; β0 = constant; β1, β2, and β3 = coefficients representing the linear weight of X1, X2, and X3, respectively; β12, β13, and β23 = coefficients representing the interactions between the variables; β11, β22, and β33 = coefficients representing the quadratic influence of X1, X2, and X3; and ε = pure error.17
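As a minimal, self-contained sketch of fitting such a quadratic model by least squares (not the authors' software; the design matrix and response values below are fabricated placeholders):

import numpy as np

def quad_terms(X):
    """Design matrix of the full quadratic model: intercept, linear terms,
    two-factor interactions, and squared terms (10 coefficients)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones_like(x1), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(15, 3))       # stand-in for the coded BBD
y = 52.3 + 6.4 * X[:, 0] + rng.normal(0.0, 1.0, 15)  # illustrative response

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print(beta)  # fitted coefficients beta0 .. beta33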
Results and Discussion
From the X-ray diffraction patterns (Figure 1A), it is possible to observe the typical basal peaks, corresponding to reflections from the planes formed by the interaction between zinc ions and octanoate anions, stacked along the crystallographic axis "a".
The diffraction peaks have a uniform distribution of distances between them and can be observed in the region between 3 and 15° in 2θ. All diffraction peaks were indexed in accordance with the literature.18 To calculate the basal cell parameters of zinc octanoate while avoiding errors attributed to sample displacement from the center of the diffraction goniometer, and the larger errors at peaks positioned at lower diffraction angles, a reported procedure was used.19,20 A basal distance of 20.83 Å was obtained (see Figure S1 and Table S1 in the Supplementary Information), in perfect agreement with the literature.18 The layer is structured by the coordination of carboxylate groups to zinc, a fact confirmed by analysis of the infrared spectrum of the compound, in which vibrational modes characteristic of the organic part of octanoic acid salts predominate (Figure 1B).
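For orientation, a basal distance of this kind follows directly from Bragg's law; in this sketch the example peak position is our assumption, chosen to reproduce the reported 20.83 Å with the Cu Kα wavelength quoted in the Experimental section:

import math

LAMBDA_CU_KA = 1.5418  # angstrom

def basal_distance(two_theta_deg, order=1):
    """Bragg's law, n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * LAMBDA_CU_KA / (2.0 * math.sin(theta))

# A first-order basal peak near 2theta = 4.24 deg (illustrative value)
# gives d ~ 20.8 angstrom:
print(basal_distance(4.24))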
The difference in wavenumber (Δν) between the asymmetric (1550/1530 cm⁻¹) and symmetric (1408/1398 cm⁻¹) stretching bands, with a value of 130 cm⁻¹, indicates that the carboxylate group is coordinated as a bridge between two metal centers.21 Therefore, layer formation occurs by interaction of the oxygen atoms at the end of the carboxylate with a distinct metallic zinc center.
The doublets from 1500 to 1400 cm⁻¹ correlate with a reduction of symmetry, related to the strengths of both the zinc-carboxylate bond and the Van der Waals interactions between hydrocarbon chains.21 Also, according to the same authors, the zinc-carboxylate bond is favored for short-chain carboxylates (with up to nine carbons), resulting in a stronger interaction between metal atoms and leading to the splitting of absorption bands.
The absence of a carbonyl absorption band at 1730 cm⁻¹, along with the presence of new bands in the region of approximately 1500 to 1400 cm⁻¹, also corroborates the complete resonance between the C−O bonds of the carbonyl group, a result of coordination between the zinc atoms and the carboxylate ions.
In addition, the absence of hydroxyl absorption bands in the 3500-3300 cm⁻¹ region confirms that the obtained zinc octanoate is anhydrous.22,23 The TGA/DSC curves of zinc octanoate (Figure 2) likewise show that the material is anhydrous, with no mass loss up to 180 °C; the decomposition process occurs in a single mass-loss step between 200 and 450 °C, characteristic of the oxidation of organic material and the formation of ZnO.
The DSC curve corroborates what was observed in the TGA curve: an intense exothermic peak with a minimum at 383 °C, assigned to the burning of organic matter from the sample, and a broad exothermic band at 632 °C, attributed to ZnO crystallization. The endothermic peaks in the region of 100 °C are discussed later, in the section on the DSC curves obtained with a high-resolution device. The TGA results were compatible with the ideal formula for zinc octanoate (Zn(C7H15COO)2): the observed ZnO percentage was 23.92%, consistent with the theoretical percentage of the proposed formula (23.13%).18
Catalytic activity
As described in the literature, after reaching 136 °C (the melting point of zinc octanoate) during the reaction, a mixture of fragments of the original structure is obtained, in which the carboxylate group is coordinated to the metal in two modes: (i) bidentate bridging and (ii) monodentate.11,24,25 After cooling the system, the catalyst is restructured and separated after removal of the excess isopropanol used during the reaction. This catalyst melting/dispersion occurs inside the reactor, even under the pressure generated internally by the vapor of the most volatile component in the medium, in this case the alcohol used in the reaction. It is important to mention that the alcohol acted both as a reagent and as a solvent for octanoic acid. The solvation of the reactants and the dispersion of catalyst clusters by isopropanol probably occurred through various intermolecular forces, such as ion-dipole attractions, Van der Waals forces, and hydrogen bonding.26 Table 1 shows the results for the conversion of acid to ester.
The greatest thermal conversion to ester was 36.83%, at 165 °C with a 10:1 molar ratio, and the lowest was 16.96%, at 145 °C and a 6:1 ratio. The results under all conditions served as comparative bases for the same conditions with catalyst added to the reaction medium. All results (after discounting their respective thermal conversions) were positive, indicating an effective contribution of zinc octanoate to the conversion of octanoic acid to ester.

From these results, it was possible to establish the positive or negative influence, in percentage points (pp), of each variable and of the interactions between them. The analysis of isolated effects, as well as of the combined interactions between variables (Table 2), provides important data about the magnitude of each term.
Of all the terms, the temperature was the most important factor (12.70 pp). This is confirmed by observing experiments 1 and 2 (Table 1), for example: under constant catalyst percentage and MR, increasing the temperature from 145 to 165 °C raised the conversion from 38.44 to 70.08%. The same trend was observed in experiment pairs 3/4, 5/6, and 7/8, where the same change in temperature led to changes in conversion from 43.09, 49.71, and 39.89% to 66.47, 61.21, and 74.99%, respectively.
Of all the independent variables, the molar ratio (1.71 pp) was the least relevant. Although increasing the MR led to slight increases in conversion in the reactions at 145 °C with 7% catalyst and at 155 °C with 10% catalyst, reductions in conversion were observed at 165 °C with 7% catalyst and at 155 °C with 4% catalyst.
For the percentage of catalyst (3.30 pp), a positive effect on the esterification reactions was also noted, but of lower magnitude than the effect of temperature. Increasing the catalyst percentage caused both increases (pairs 9/11 and 10/12) and decreases (pair 5/7) in the observed conversion values.
The temperature:MR interaction had a slightly negative effect on the system (−2.07 pp), while the temperature:catalyst (5.90 pp) and catalyst:MR (4.09 pp) interactions had positive effects on the esterification reactions. The squared terms MR² and catalyst² also had slightly negative effects, the exception being the term temperature². To assess which terms were really significant, the results were submitted to analysis of variance (ANOVA) (Table 3).
p-Values higher than 0.05 are considered insignificant, both for the model and for its respective terms.27 The ANOVA revealed that only the quadratic catalyst term (C²) showed no significance; this term was therefore removed before selecting an appropriate statistical model. Multiple regression analysis was applied to the data and, among the models suggested by the software (linear, two-factor interaction (2FI), quadratic, and cubic), the quadratic model was selected as the most suitable, owing to its highest significance order.28 The final model equation was obtained with a mean of 52.26. The coefficient of determination (R²) is the proportion of variation in a given response that is attributed to the model rather than to random errors. A well-adjusted model must have an R² value of 0.90 or higher; when R² is close to 1, the empirical model fits the obtained data well.28 The R² value for the response (0.962) was greater than 0.90, indicating the good quality of the obtained model. However, adding a variable to the model will always increase the R² value, regardless of its statistical significance, so a high R² does not necessarily mean that the corresponding model is suitable for evaluating response surfaces or for response optimization. Hence, it is better to use an adjusted R² (adj-R²) above 0.90 to evaluate the adequacy of a model.28 The obtained adj-R² value for the catalyzed conversion was 0.952. This is important, since a high adj-R² value means that insignificant terms were not included in the model.
The coefficient of variation (CV) describes the extent of data dispersion. As a general rule, the CV must be below 10%. The CV value obtained in this work (4.81%) did not exceed this limit, indicating acceptable levels of precision and reliability in the experiments.
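For reference, these statistics have the standard definitions (textbook response-surface formulas, not quoted from this paper), with n the number of runs, p the number of model terms excluding the intercept, s the residual standard deviation, and \bar{y} the mean response:

R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}, \qquad
R^2_{\mathrm{adj}} = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}, \qquad
CV = \frac{s}{\bar{y}} \times 100\%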
Figure 3 shows the response surface graphs for the factors temperature, MR and percentage of catalyst against the response (catalyzed conversion).
All the surfaces allow a graphical interpretation of the magnitude of each variable's effect. For example, the slope of the surface along the temperature axis is consistent with this variable being the most relevant to the esterification system, while the slight slope along axis B (MR) denotes the low importance of MR. The greatest response values were observed in the surface regions from 7 to 10% catalyst.
Optimal points for each model were obtained by selecting desirable parameters, such as smaller quantities of reactants and catalyst, while giving maximum priority to high conversion values. The suggested point for catalyzed conversion was 165 °C, MR 6:1, and 5.97% catalyst, with a predicted conversion of 65.42%. This point was validated by tests employing the proposed conditions, and an experimental value of 64.76 ± 1.22% was obtained. Therefore, the model is suitable for predicting the influence of the reaction parameters on the esterification of octanoic acid with isopropanol.
Conversion to isopropyl octanoate as a function of time
To evaluate the time required to reach reaction equilibrium, the synthesis of isopropyl octanoate catalyzed by zinc octanoate was followed as a function of time at three different temperatures, with an isopropanol:octanoic acid MR of 8:1, a stirring speed of 500 rpm, and 7% of catalyst (Table 4). The zero time indicated in Table 4 corresponds to the instant at which the reactor reached the specified temperature.
At this point, for all temperatures, the percentage of isopropyl octanoate in the reaction medium was not greater than 5%. For the reactions at 140 and 150 °C, the catalyzed conversion showed significant variations from 60 to 90 minutes of reaction time. From 180 minutes onward, the catalyzed conversions in all cases did not show large statistical differences among themselves, and by 240 minutes the system had reached chemical equilibrium. These experiments also confirmed the significant effect of temperature during esterification: at 117 °C, no catalyzed conversions greater than 18% were obtained even after 240 minutes of reaction, whereas at 140 and 150 °C conversions of approximately 19.42 and 24.97%, respectively, were obtained after only 30 minutes. The best catalyzed conversions were achieved at 150 °C, but the ester values from 180 to 240 minutes are not very different from their counterparts at 140 °C.
Tests for reuse of zinc octanoate
Given the results of isopropyl octanoate synthesis using zinc octanoate as catalyst, and the observed restructuring of the catalyst after the reactions, reuse tests were carried out to check whether zinc octanoate remained catalytically active after the first reaction. The conditions for the catalyst reuse tests were: two hours of reaction time, 165 °C, MR 8:1, and 10% catalyst (initially, about 3 g of catalyst was used, and the amounts of the other reagents were adjusted to this catalyst amount).
The conversions obtained in the first and second reuses of zinc octanoate under these conditions were 61.51 and 55.10%, respectively. The recovered catalyst masses were 2.78 g after the first reuse and 2.51 g after the second. Therefore, considering the initial catalyst mass of 3 g, the percentages of physical catalyst loss were 7.25% from the initial experiment to the first reuse, 9.66% from the first to the second reuse, and 16.33% from the initial experiment to the second reuse. The results obtained after three reaction cycles under the same reuse conditions showed that zinc octanoate retains its catalytic activity after the first reaction, with a small decrease in conversion over the next two cycles. This behavior can be explained by the fact that not all of the catalyst mass was recovered and that, despite the adjustments to the reagent amounts, the reactor volume remained constant, which may have led to a greater amount of isopropanol in the vapor phase and thus reduced the conversion values. Even so, the progressive loss of catalytic activity of heterogeneous catalysts after consecutive reuses remains a challenging issue.29

To verify whether there was any structural modification of the catalyst after the reactions, we applied the XRD and FTIR techniques (Figure 4). In the X-ray diffraction patterns of zinc octanoate recovered after the reaction (Figure 4A-b), basal peak broadening and splitting were observed for at least two peaks. This effect can be attributed mainly to the speed of reactor cooling, which induces stacking faults during crystallization (see the insert in Figure 4A). This interpretation is supported by the FTIR spectra of zinc octanoate after the reactions, in which no energy changes of the vibrational modes of its constituent groups were observed relative to the original material, and no new functional groups were detected.
The zinc octanoate recovered after the first use (Figure 4A-b) was subjected to heat treatment at 140 °C, and the melted material was slowly cooled to induce better layer packing and minimize the faults. The X-ray diffraction pattern (Figure 4A-c) shows that this objective was partially achieved, since the split basal peaks changed to a broader, but single, diffraction peak. The heat treatment also did not change the energies of the vibrational modes of the zinc octanoate constituent groups relative to the original substance (Figure 4B-c).
Although the diffraction peaks became narrower, indicating better structural order and an increase in crystal size, the zinc octanoate structure was preserved after two reaction cycles, as indicated by both XRD (Figure 4A-d) and FTIR (Figure 4B-d), demonstrating that the material can be recovered intact and reused.
According to the DSC curves (Figure 5), different phase transitions occurred during the heating of zinc octanoate recovered from the second esterification cycle up to 155 °C, as well as during its cooling.
For example, in the first heating cycle, two endothermic transitions were noted, assigned to layered crystal-phase I (an enantiotropic transition at 95 °C) and phase I-isotropic liquid (melting at 127 °C).30,31 In the first cooling, two exothermic transitions were observed. Beyond the observations from the XRD analysis, the positive effect of heating on the structural organization of the recovered zinc octanoate was also apparent in the second and third DSC heating cycles, which showed three endothermic transitions associated with a more organized layered structure: layered crystal-phase I (62 °C), phase I-phase II (90 °C), and phase II-isotropic liquid (122 °C). The second and third cooling cycles presented profiles similar to the first cooling cycle, with two exothermic transitions.
Although the catalyst was recovered after each catalytic cycle, it is important to emphasize that the catalytic activity experiments were carried out at temperatures higher than the phase transitions observed in Figure 5, so the zinc octanoate is "melted" in the reaction medium, acting approximately in a hydrophobic, homogeneous manner, as occurs for other layered carboxylates.11,25,32

Figures 6A and 6B show the 13C NMR spectra of the synthesized zinc octanoate, as well as of the compound recovered from the esterification reactions and submitted to heat treatment at 140 °C.
Both spectra are consistent with those reported for zinc carboxylates.14,22,23 The chemical shift of the carbonyl group is assigned to the signal at 184.87 ppm, whereas the resonance observed at 14.84 ppm is attributed to the methyl carbon at the end of the carbon chain. The signals between 24 and 37 ppm correspond to the carbons in the middle of the chain. Numbering the carbons from the end of the chain to the carbonyl carbon therefore gives the order C1 (terminal methyl) through C8 (carbonyl). Even though there were some differences in the packing of the layers, as observed by XRD, no significant differences were noted between the overlapping 13C NMR spectra of the synthesized and the recovered/heated zinc octanoate, in agreement with the FTIR results.
Conclusion
Anhydrous zinc octanoate (Zn(C7H15COO)2) was synthesized and characterized, and showed promising catalytic activity for the synthesis of isopropyl octanoate, especially at 165 °C, isopropanol:octanoic acid molar ratios of 8:1 and 6:1, and with 7 and 10% catalyst. The results corresponding to the center point of the experiment should also be noted, because they presented significant conversion gains.
The analysis of variable interaction effects showed that temperature is the factor that most influences the system, followed by the synergic effects of temperature:catalyst and molar ratio:catalyst. According to the ANOVA, only the term catalyst² was not significant; all other terms had some importance to the system.
The ANOVA also yielded a mathematical model that, expressed through response surfaces, allowed visualization of the variable interactions. For this model, optimized conditions were obtained for the production of isopropyl octanoate using zinc octanoate as a catalyst: 165 °C, a 6:1 molar ratio, and 5.97% catalyst.
After the experiments, the catalyst was recovered, and the solid obtained presented stacking faults, attributed mainly to imperfect layer packing in the crystals. However, when the recovered zinc octanoate was reused, much more organized structures were obtained. Despite the difficulty of recovering the material, the recovered catalyst maintained its catalytic activity after two reaction cycles, demonstrating that zinc octanoate is a promising material for industrial applications.
Figure 4. (A) XRD patterns and (B) FTIR spectra of zinc octanoate: (a) as synthesized; (b) after the first use; (c) after heating of (b) at 140 °C; and (d) after the second use.
Figure 6. 13C NMR spectra of (a) synthesized zinc octanoate and (b) after use followed by treatment at 140 °C. (A) Full spectrum and (B) expanded spectrum.
Table 2. Effects of isolated variables and their interactions
Table 3. Analysis of variance
Table 4. Isopropyl octanoate conversions as a function of time at different reaction temperatures | 2018-09-13T09:50:14.347Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "d988b80a2485f14eff8ab295a151700fbbc2b6de",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21577/0103-5053.20160251",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d988b80a2485f14eff8ab295a151700fbbc2b6de",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
260304436 | pes2o/s2orc | v3-fos-license | An ependymin-related blue carotenoprotein decorates marine blue sponge
Marine animals display diverse vibrant colors, but the mechanisms underlying their specific coloration remain to be clarified. Blue coloration is known to be achieved through a bathochromic shift of the orange carotenoid astaxanthin (AXT) by the crustacean protein crustacyanin, but other examples have not yet been well investigated. Here, we identified an ependymin (EPD)-related water-soluble blue carotenoprotein responsible for the specific coloration of the marine blue sponge Haliclona sp. EPD was originally identified in the fish brain as a protein involved in memory consolidation and neuronal regeneration. The purified blue protein, designated as EPD-related blue carotenoprotein-1, was identified as a secreted glycoprotein. We show that it consists of a heterodimer that binds orange AXT and mytiloxanthin and exhibits a bathochromic shift. Our crystal structure analysis of the natively purified EPD-related blue carotenoprotein-1 revealed that these two carotenoids are specifically bound to the heterodimer interface, where the polyene chains are aligned in parallel to each other like in β-crustacyanin, although the two proteins are evolutionary and structurally unrelated. Furthermore, using reconstitution assays, we found that incomplete bathochromic shifts occurred when the protein bound to only AXT or mytiloxanthin. Taken together, we identified an EPD in a basal metazoan as a blue protein that decorates the sponge body by binding specific structurally unrelated carotenoids.
Marine life displays a variety of striking colors, some of which depend on hydrophobic compounds known as carotenoids (1, 2). Certain carotenoids bind to proteins called carotenoproteins. The most well-studied carotenoproteins in marine animals are the crustacyanins, which are present in crustaceans. A crustacyanin was first purified from a lobster in 1965 (3). Crustacyanins are nonglycosylated, water-soluble blue proteins that are responsible for the blue coloration of crustacean shells. Structural analyses of a crustacyanin in the lipocalin family (4) revealed the presence of noncovalently bound astaxanthin (AXT) in the heterodimer subunits.
Carotenoproteins have been detected in various marine animals, including actomyosin in salmon (5), asteriarubin in starfish (6), and ovorubin in the eggs of gastropod snails (7). These carotenoproteins, including the crustacyanins, are tightly fixed in the bodies or muscle tissues (1). Although crustacyanins have been extensively studied to elucidate details of the protein structure and the mechanism of the carotenoids' bathochromic shift (4), relatively little is known about carotenoproteins in other marine species.
Marine sponges are basal metazoans belonging to the phylum Porifera. Like other marine organisms, marine sponges display a variety of vibrant colors owing to the presence of carotenoids (8). However, the mechanisms underlying this coloration remain unclear. To the best of our knowledge, four carotenoproteins have been isolated from marine sponges so far (9-11). A water-soluble blue carotenoprotein (BCP) was isolated from the blue sponge Suberites domuncula (9); it binds monohydroxy and monoepoxy carotenoids, with an absorption maximum at 590 nm. A water-insoluble carotenoprotein was purified from an orange sponge, Axinella verrucosa (10). However, their primary structures, including the amino acid sequences and the encoding genes, have not been determined.
Our recent study found that water-soluble carotenoproteins are present in marine organisms. In the present study, we purified and characterized the blue protein responsible for the coloration of the marine blue sponge Haliclona sp. To the best of our knowledge, the purified protein is the first reported carotenoid-binding protein in the ependymin (EPD) protein family. EPD was first discovered in the ependymal zone of the goldfish brain following its enhanced expression after learning events (20-22). Since then, EPD orthologs known as EPD-related proteins (EPDRs) have been identified in a variety of organisms ranging from basal metazoans to humans (23). Although the roles of EPDs are largely unknown, several studies have reported their involvement in memory consolidation and learning (22, 24, 25), optic nerve regeneration (26), and human brown fat cell development (27). Recent crystallographic studies examined EPDR1, a member of the mammalian EPDR family (28, 29); they found an antiparallel β-sheet forming a deep hydrophobic pocket, with a possible function of accommodating lipids. Here, we describe the identification and crystal structure analysis of EPD-BCP1, a novel BCP belonging to the EPD protein family. Characterization of the purified protein revealed that it is a heterodimer that accommodates two carotenoids, AXT and mytiloxanthin (MXT). In contrast to the assumption based on the structural studies of EPDR1, our structure analysis of EPD-BCP1 reveals that the two carotenoids are bound at the interface of the heterodimer rather than in the hydrophobic pocket of the β-sheets. Based on the structure and amino acid sequence analyses, the potential mechanisms of the bathochromic shift and the phylogenetic lineage of this protein are discussed.
Identification of protein and pigments
The blue extract was collected by manually squeezing the sponge body. The color of the freshly squeezed extract was the same as that of the body, and the body turned faint brown after the extract was squeezed out (Fig. 1B). The blue aqueous supernatant was subjected to gel-filtration column chromatography, from which a single peak representing the blue fraction was obtained (Figs. 1, C and D, and S3). Separation of the purified blue protein by SDS-PAGE under reducing and nonreducing conditions revealed a single band with an apparent molecular mass of 19 kDa and 35 kDa, respectively (Figs. 1D and S3). Based on the retention time in gel filtration, the apparent molecular mass of the native protein was estimated at 40 kDa, indicating that the blue protein is a dimer. The purified protein showed absorption maxima at 280 and 557 nm. The broad absorption maximum at 557 nm coincided with that of the crude extract from the sponge body (Figs. 1, C and E, and S3), indicating that the blue protein constitutes the primary color of the blue sponge. The pigments bound to the blue protein were extracted using the Bligh-Dyer method (36). The organic phase was orange.
Separation of the organic phase by HPLC on a C18 reversed-phase column yielded two peaks, P1 and P2 (Figs. 2B and S3). The compound corresponding to each peak was analyzed by LC-MS. The major component corresponding to P1 exhibited a broad absorption maximum at 478 nm (Figs. 2B and S3), its predicted formula was C40H52O4 ([M + H]+ at m/z = 597.3941, error = 0.0002), and it had the same retention time as the AXT standard (18, 19). The 1H NMR spectrum was also compatible with that of standard AXT (Table S1). The chirality of P1 was determined to be (3S,3′S)-AXT using a Sumichiral OA-2000 column (Fig. 2C). The compound corresponding to P2 exhibited a broad absorption maximum at 474 nm, its predicted formula was C40H54O4 ([M + H]+ at m/z = 599.4095, error = 0.0001), and its 1H NMR spectrum was compatible with that of MXT (Table S2) (37-39). Consequently, the compound corresponding to P2 was identified as MXT (Fig. 2C).
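As a sanity check on the predicted formulas (our sketch, using standard monoisotopic masses; the function name is hypothetical):

MONO = {"C": 12.0, "H": 1.007825, "O": 15.994915}  # monoisotopic atomic masses
PROTON = 1.007276

def mz_mh_plus(formula):
    """Monoisotopic m/z of [M + H]+ for a simple CxHyOz formula."""
    return sum(MONO[el] * n for el, n in formula.items()) + PROTON

print(mz_mh_plus({"C": 40, "H": 52, "O": 4}))  # ~597.394, cf. AXT (P1)
print(mz_mh_plus({"C": 40, "H": 54, "O": 4}))  # ~599.410, cf. MXT (P2)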
The blue protein belongs to the EPD family
The blue proteins purified from the samples collected between 2016 and 2021 showed similar absorption peaks, molecular weights, and carotenoid compositions. The N-terminal amino acid sequences of the purified blue proteins were VPTXKETPPQWSGD (obtained from the band around 19 kDa) and QAPTXTD (obtained from the band around 20 kDa) (Fig. 2A). The genes encoding these N-terminal amino acid sequences were detected in the de novo sequencing data of the complementary DNA (cDNA) libraries, and the full-length cDNAs were obtained by PCR. The amino acid sequences deduced from the cDNAs conserved N-terminal hydrophobic signal sequences and putative N-glycosylation (Asn-x-Thr) sites, and the two proteins showed 50% amino acid sequence identity to each other (Fig. 2A). A putative EPD domain (accession no. pfam00811) was detected through a BLAST search. The top blastp hit was a protein of unknown function encoded in the genome of the demosponge Amphimedon queenslandica (40) (accession no. XP_003389285, 39% identity). An N-terminal hydrophobic peptide was detected for each protein using the SignalP program, and the cleavage sites were determined by N-terminal amino acid sequencing to lie between Ala16 and Val17 (in EPD-BCP1α) and between Ser19 and Gln20 (in EPD-BCP1β) (Fig. 2A). Periodic acid-Schiff staining revealed that the purified protein was glycosylated (Fig. 2D). These results suggest that the purified blue protein is located outside the plasma membrane, like other EPD family proteins (41). Microscopy analyses showed that the blue pigment was localized around the skeletons (Fig. 2E). EPD-BCP1 conserves four cysteine residues that are commonly conserved in EPD family proteins and are involved in disulfide-bond formation causing dimerization (Figs. 2A and 6B). We designated this protein EPD-BCP1 (EPD-related AXT- and MXT-binding BCP1), with its subunits abbreviated as EPD-BCP1α and EPD-BCP1β, based on its pigment-binding properties and the homology search results.
Crystal structure of EPD-BCP1
The crystal structure of the natively purified EPD-BCP1 was determined by molecular replacement with the AlphaFold2 model (42) and refined to 2.44 Å resolution (Fig. 3A). The asymmetric unit contains four dimers, of which the one with the lowest average B-factor is described here, because the structures of all the dimers are nearly identical. The electron density map delineates N-linked glycosylation on Asn174(α), Asn51(β), and Asn178(β), where two N-acetyl-D-glucosamine molecules (N-acetyl-β-D-glucosaminyl-(1→4)-N-acetyl-β-D-glucosamine) were modeled on each asparagine residue. As predicted from the amino acid sequence (Fig. 2A), Asn92(β) also appears to be glycosylated, but the N-glycan was not modeled because of the ambiguity of the electron density in this region. While three pairs of disulfide bonds were found in the crystal structures of EPDR1 (28, 29), EPD-BCP1 lacks two of these cysteine residues, in the loop between strands β3 and β4 and at the C terminus, leaving two pairs of cysteine residues in each subunit that form intramolecular disulfide bonds. The structures of the α and β subunits are very similar, with a root mean square deviation of 1.5 Å for 168 Cα atoms, sharing a curved antiparallel β-sheet composed of 11 β-strands (β6-β1 and β11-β7). Structure comparison of the dimers of EPD-BCP1 and EPDR1 (Protein Data Bank ID: 6E8N) revealed that the two proteins share an identical β-sheet topology, with a root mean square deviation of 3.8 Å for 320 Cα atoms. The outward concave surface of the dimer is formed by all the β-strands except β7, whereas the inner surface includes the dimer interface, composed of eight β-strands (β3-β1 and β11-β7) that form a tunnel together with the counterpart subunit.
The electron densities corresponding to the two carotenoids are found at the interface between the two subunits. The two carotenoids were identified as AXT and MXT based on the electron density map (Fig. S4). As in β-crustacyanin (β-CR), the AXT bound to EPD-BCP1 is in the 6/6′-s-trans conformation (4), in contrast to free AXT, which is mostly in a 6/6′-s-cis conformation (43). In addition, the β- and κ-end rings of the bound MXT are rotated by around −35° and 170°, respectively, compared with the geometry-optimized NMR structure of free MXT obtained from density functional theory (DFT) calculations at B3LYP/def2-TZVP (Fig. S5). Both the β- and β′-end rings of AXT in EPD-BCP1 are coplanar with the polyene chain, as observed for the AXTs in β-CR, resulting in extension of the polyene conjugation system. In contrast, the two end rings of MXT are noncoplanar with the polyene chain. This should have little effect on the extension of the conjugation system, because the κ-end ring has no double bond and the double bond in the β-end ring is the second adjacent to the C7-C8 triple bond. In EPD-BCP1, AXT and MXT are aligned such that one of the pseudo-twofold axes shown in Figure 3C is perpendicular to that of the heterodimer, whereas two of the pseudo-twofold axes in β-CR are coincident (Fig. 3B). As a common feature of the orientation of the two carotenoids in EPD-BCP1 and β-CR, the two polyene planes are nearly parallel, with a minimum distance of 7 Å, although the carotenoids are slightly bowed outward in EPD-BCP1 but inward in β-CR (Fig. 3C). The intersection of the two carotenoids in EPD-BCP1 is around the C14-C15 bond, with a C9(MXT)-C15(MXT)-C15(AXT)-C9(AXT) dihedral angle of −141°, whereas that in β-CR is more off-centered, around the C12′-C13′ bond, with a C6′(AXT1)-C12′(AXT1)-C12′(AXT2)-C6′(AXT2) dihedral angle of 127° (Fig. S6).
Interaction between protein and carotenoid molecules
The α and β subunits contribute almost equally to the interaction with each carotenoid molecule (Fig. 4, A-C). Namely, the α-subunit-AXT and β-subunit-AXT interfaces account for 48.1% and 45.4% of the total solvent-accessible surface area of AXT, respectively. Similarly, the α-subunit-MXT and β-subunit-MXT interfaces each account for 46.4% of the total solvent-accessible surface area of MXT. The two carotenoids are accommodated by symmetrically aligned amino acid residues from the two subunits, although the equivalent residue pairs are poorly conserved.
Notably, nine residues occupy the interspace between AXT and MXT in EPD-BCP1 (Fig. 4A), in contrast to β-CR, where no amino acid residues were found between the bound AXTs but a hydrocarbon molecule derived from the paraffin oil used in crystallization was found instead (4). The nine residues at the interface are Leu44(α), Trp34(α), Thr169(α), Ser158(α), and Tyr140(α), and the equivalent residues in the β subunit are Val47(β), Asn38(β), Tyr173(β), and Thr162(β) (but not Phe144(β)). The side chains of Trp34(α), Tyr162(β), Thr169(α), Ser158(α), Tyr140(α), and Asn38(β) form a hydrogen-bond network together with the C8′ hydroxy group in the polyene chain of MXT, which is likely to contribute to the specific interaction between MXT and the binding site, and to the polarization of MXT. In addition, the C3′ hydroxy group in the κ-end ring of MXT forms a hydrogen bond with the hydroxy group of Ser133(α), whereas the β-end ring of MXT is fixed by the aromatic ring of Tyr142(β), oriented parallel to the C5-C6 double bond (Fig. 4C). On the other hand, the C3 hydroxy group and the C4 keto group in the β-end ring of AXT form hydrogen bonds with Glu131(α) and Arg49(β)/Lys67(β), respectively, whereas the β′-end ring of AXT appears to fluctuate more because of its weaker interaction with the protein (Fig. 4B), as revealed by the higher average B-factor of the β′-end ring (56 Å²) compared with that of the β-end ring (31 Å²).
Reconstitution assays of apoproteins with orange carotenoids
The carotenoid-free apoprotein was obtained by gently mixing the protein solution with diethyl ether and acetone (44). The colorless apoprotein separated into the aqueous phase, and the detached orange carotenoids separated into the organic phase. When the apoprotein was reconstituted with the detached orange carotenoids, the solution turned blue and showed an absorption spectrum almost identical to that of the purified holoprotein (Fig. 5A).
Next, the apoprotein was tested for selective binding of carotenoids (44). The apoprotein fully recovered the blue color upon reconstitution with AXT and MXT, but the color recovery was incomplete with AXT or MXT alone (Fig. 5B). When EPD-BCP1 was reconstituted with fucoxanthin (a marine carotenoid) or canthaxanthin (structurally similar to AXT), each reconstituted protein showed a broad absorbance spectrum with a single peak. The peak did not show a spectral shift, but the shoulder of the peak corresponded to a red shift in each protein (Fig. 5, C and D). These results indicate that EPD-BCP1 preferentially binds AXT and MXT, and that the complete red shift is achieved only when both AXT and MXT are bound.
Structural insight into the bathochromic shift of carotenoids
Previous theoretical studies of β-CR using quantum chemical calculations (45-52) have proposed three mechanisms underlying the large red shift in the absorption maximum wavelength (λmax) of 100 nm: (1) the conformational changes of AXT upon binding to the protein; (2) polarization effects of the protein environment; and (3) excitonic interactions between the two AXTs. While all three mechanisms are likely involved in the red shift, the extent of their contributions is still controversial. To examine the first mechanism for the red shifts of AXT and MXT upon binding to EPD-BCP1, the effects of the end-ring rotations on the excitation energy were investigated. Geometry optimizations of free AXT and MXT in acetone using DFT calculations at B3LYP/def2-TZVP, followed by calculations of the vertical S0→S2 excitation energies at TD-ωB97X/def2-TZVP, gave λmax values of 469 and 457 nm, respectively (Table 1), comparable to the experimental values of 478 and 474 nm (Fig. 2B). In contrast, geometry optimizations and excitation energy calculations of AXT and MXT under identical conditions, except for dihedral angle constraints fixing the rotations of the two end rings in the protein-bound state, gave λmax values of 494 and 455 nm, respectively. The C5/C5′-C6/C6′-C7/C7′-C8/C8′ dihedral angles of the geometry-optimized free and protein-bound AXTs were −39.9°/−40.0° and 170.6°/164.4°, respectively, indicating that the former is close to a 6/6′-s-cis conformation and that the latter is in a 6/6′-s-trans conformation, nearly coplanar with the polyene chain, as observed in the crystal structure of β-CR. The calculated red shift of 25 nm (0.13 eV) caused by the conformational change from 6/6′-s-cis to 6/6′-s-trans is comparable to the values reported in previous theoretical studies of β-CR (45, 46, 51). The calculated energy cost for fixing the torsion angles of the β- and β′-end rings of AXT in the protein-bound state was 1.35 kcal/mol, by which the conjugation system of the polyene chain is extended. In contrast to AXT, there was no red shift in the calculated excitation energies between free and protein-bound MXT. This confirms that no extension of the polyene conjugation system occurs in MXT, despite the calculated energy cost of 1.41 kcal/mol for fixing the torsion angles of its β- and κ-end rings in the protein-bound state.
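For reference, the quoted correspondence between shifts in λmax and energies follows from E = hc/λ; a one-line check (ours):

HC_EV_NM = 1239.842  # h*c in eV*nm

def shift_in_ev(lambda_from_nm, lambda_to_nm):
    """Energy difference corresponding to a lambda_max shift."""
    return HC_EV_NM / lambda_from_nm - HC_EV_NM / lambda_to_nm

# ~0.13 eV: the calculated s-cis -> s-trans red shift of AXT (469 -> 494 nm)
print(shift_in_ev(469, 494))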
Considering that AXT and MXT alone showed red shifts of 71 nm (478→549 nm) and 76 nm (474→550 nm), respectively, upon binding to EPD-BCP1 (Figs. 2B and 5B), we concluded that the intrinsic effect alone (i.e., conformational changes of the carotenoid itself) could not explain the observed large shifts of λmax. To evaluate the polarization effects of the protein environment (the second mechanism above), three-layer ONIOM geometry optimizations at the B3LYP-D4/TZVP/HF-3c level were performed separately for each carotenoid. In these optimizations, the QM1 region was defined as the carotenoid, the QM2 region as the side chains of the nearby hydrophilic and aromatic residues, and the MM region as the rest of the protein, including the N-glycans, as well as the solvent molecules and ions. Subsequent QM calculations of the excitation energies of AXT and MXT at the TD-ωB97X/def2-TZVP level for the geometry-optimized QM1 and QM2 regions gave λmax values of 567 and 513 nm, respectively (Table 1), implying that the polarization effects of the protein environment also could not fully explain the large red shift of MXT upon binding to EPD-BCP1. To evaluate the effects of the protein environment on the geometry-optimized ground-state structures of AXT and MXT in the protein, we calculated the bond length alternation (BLA) values. The BLA is defined as the average difference between single- and double-bond lengths in a π-conjugated system, and a correlation between BLA values and excitation energies has previously been reported (52). The BLA values of the geometry-optimized structures of AXT in acetone and in EPD-BCP1 were 0.075 Å and 0.045 Å, respectively, whereas those of MXT in acetone and in the protein were 0.066 Å and 0.057 Å, respectively. The larger difference between the BLA values of AXT in acetone and in the protein, compared with that of MXT, is consistent with the larger calculated red shift of AXT upon binding to the protein. Furthermore, it is interesting to note that the β-end ring side of AXT, which is in more intimate contact with the protein, has a much smaller BLA than the β′-end ring side (Fig. S7). The same tendency was found for the κ- and β-end ring sides of MXT, although the difference was more modest than in AXT.
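A minimal sketch of the BLA arithmetic (the alternating bond ordering and the example lengths are our assumptions, chosen to fall in the reported range, not the refined coordinates):

def bla(bond_lengths_A):
    """Bond length alternation: mean single-bond length minus mean
    double-bond length, assuming the list alternates double, single,
    double, ... along the polyene chain."""
    doubles = bond_lengths_A[0::2]
    singles = bond_lengths_A[1::2]
    return sum(singles) / len(singles) - sum(doubles) / len(doubles)

# Illustrative lengths in angstroms, giving BLA = 0.06 angstrom:
print(bla([1.36, 1.44, 1.37, 1.43, 1.38, 1.42]))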
It is most likely that the direct interaction between AXT and the two positively charged residues, Arg49(β) and Lys67(β), plays a key role in the red shift, although their charges should be neutralized by the two adjacent acidic residues, Asp111(α) and Glu131(α) (Fig. 4B). In the case of MXT, the Nζ atom of Lys45(β) lies at a distance of 5.6 Å from the C6′ carbonyl oxygen, which is not neutralized by any acidic residue. We first assessed the effect of the indirect interaction between Lys45(β) and MXT via a hydrogen-bond network involving four water molecules, which were identified in the QM1/QM2/MM geometry-optimized structure but not in the electron density map. The QM calculation of the excitation energy with the four water molecules included in the QM region gave a λmax of 523 nm (Table 1), closer to the experimental value of 550 nm. We further assessed the possibility that a proton of Nζ of Lys45(β) is transferred to the C6′ keto group of MXT, because protonation of the C4 keto group of AXT is known to cause a significant red shift (46). The geometry optimization caused a larger displacement of the nearby residues, and the subsequent QM calculation of the excitation energy gave a λmax of 684 nm, which is overestimated compared with the experimental value.
Assuming that each carotenoid binds specifically to its own binding site in EPD-BCP1, the difference in λmax between the double- and single-carotenoid-bound EPD-BCP1 should be mainly attributable to the exciton coupling effect. The experimental values of Δλmax between the double- and single-carotenoid-bound EPD-BCP1 were 7 to 8 nm (Fig. 5B), confirming that the effect of the exciton interaction of the two carotenoids on the large red shift is marginal compared with that of the polarization effect of the protein environment, as previously reported in the theoretical studies of β-CR (48, 49).

Footnotes to Table 1: (a) The end-ring rotation angles were fixed to those of the protein-bound forms. (b) Four water molecules were added in the QM region to connect the amino group of Lys45(β) and the C6′ keto group of MXT via a hydrogen-bond network. (c) A proton was transferred from Nζ of Lys45(β) to the C6′ keto group of MXT.
Phylogeny and diversity of EPD-BCP1
Although true EPDs are restricted to teleosts, the distribution of the EPDR protein family is much broader than originally thought. McDougall et al. (23) examined the phylogeny of EPDRs and reported considerable diversity among bilaterian animals and other eukaryotes. They classified EPDRs on the basis of the pattern of conserved cysteine residues in profiles 1 to 3, which supported the phylogenetic division of EPDRs into "clade 1" and "clade 2". Accordingly, the sponge EPDs encoded in the genome of A. queenslandica were placed in an ancestral clade within clade 1, which includes the clades of mammalian ependymin-related proteins and the original fish EPDs (23). A more recent classification based on protein structure indicated that EPDRs belong to the LolA (bacterial proteins related to the LolA lipoprotein transporter)/EPDR superfamily (28). Phylogenetic analysis revealed that EPD-BCP1 constitutes a sponge clade with A. queenslandica EPDR in an early-branching metazoan lineage (28) (Fig. 6A) and displays profile 1 of the previously described "clade 1" subgroup based on the conserved cysteine residue patterns (23) (Fig. 6B). To our knowledge, none of these EPDs has been identified as a color protein.
Discussion
In this study, a water-soluble BCP was purified from the marine sponge Haliclona sp. Several marine animals harbor orange carotenoids but display blue color (8). However, the molecular mechanisms underlying coloration in these animals remain largely unknown. To the best of our knowledge, EPD-BCP1 is the second BCP to be characterized structurally and functionally in marine organisms. Hence, this carotenoprotein may be useful for studying the mechanisms underlying the bathochromic shift.
EPD-BCP1 is a member of the EPDR family, whose members have diverse functions. They were originally identified as fish-specific secreted glycoproteins (21, 22). Although the functions of protein homologs in the annotated genome data of other species remain largely uncharacterized, previous studies have suggested their involvement in the fish nervous system, in intestinal regeneration in sea cucumbers (53), in calcium-dependent matrix binding owing to the presence of N-linked carbohydrate moieties (54), in human fibroblast contractility (55), and in the development of human brown fat cells (27). Recently, Wei et al. (28) and Park et al. (29) analyzed the X-ray structures of EPDR1 and identified the relationship between the EPDR family and the bacterial LolA lipoprotein family based on the presence of a LolA fold in these proteins. On the basis of its protein structure and in vitro lipid-binding profile, EPDR1 is believed to be involved in lipid binding and transport (28, 29). Although the functional role of EPDR1 remains unclear because its natural lipid ligands are still uncharacterized, the results of this study provide convincing evidence of the ability of sponge EPDR to bind lipophilic carotenoids. Nevertheless, further studies are required to elucidate the evolution and function of these EPDs, not only to unravel their possible involvement in the coloration of marine organisms but also to reveal the roles of functionally uncharacterized EPDRs in vertebrates and invertebrates. Although there are no direct experimental data regarding the ligand-binding site of EPDR1, a previous crystal structure analysis of human EPDR1 in a PEG-bound form suggested that ligands bind to the outer concave surface of the β-sheet (28). In contrast, the crystal structure of the native EPD-BCP1 in this study revealed that the two carotenoids bind to the dimer interface in a specific manner, leaving the deep outer clefts of the β-sheets vacant. We note, however, that PEG was used as a precipitant for crystallization and that the concave surface of EPD-BCP1 is rich in hydrophobic residues, as in EPDR1. On the other hand, the entrance into the dimer interface of EPDR1 is blocked mainly by the loop between the β3 and β4 strands, and the surface of its dimer interface is rich in hydrophilic residues compared with that of EPD-BCP1. These findings suggest that the dimer interface of EPDR1 is unsuitable for the binding of hydrophobic ligands.
Here, the two carotenoids were identified from spectroscopic data as (3S,3′S)-AXT (peak P1) and MXT (peak P2). In crustaceans, AXT is a mixture of (3S,3′S)-, (3R,3′S)-, and (3R,3′R)-isomers, which may be produced from the ingested food. In contrast, the blue sponge contains only (3S,3′S)-AXT. This may be because it is produced from the usual (3R,3′R)-zeaxanthin by a ketolase, or from β-carotene by a stereospecific hydroxylase and ketolase, either of which could be present in the blue sponge or in unknown symbiotic bacteria. Notably, the hydroxyl groups of (3S,3′S)-AXT and (3R,3′R)-zeaxanthin share the same chirality. MXT, which was first isolated from the edible mussel Mytilus edulis, is widely distributed among marine animals (37, 38) but not among carotenoid-producing organisms. MXT is produced from the fucoxanthin of brown algae (38). Further studies are warranted to clarify the enzymes and metabolic pathways associated with these carotenoids.
Our X-ray crystal structure analysis of EPD-BCP1 has revealed some striking similarities with β-CR (4) in terms of the structural properties of the bound carotenoids, although these two carotenoproteins are evolutionarily unrelated. Both proteins form heterodimers that bind two carotenoid molecules such that the two polyene planes are nearly parallel, with a minimum distance of 7 Å. In addition, the end-ring rotation angles of AXT in EPD-BCP1 are very close to those found in β-CR, resulting in the 6/6′-s-trans conformations of AXT. The DFT calculations in this study have confirmed that the end-ring rotations alone contribute only partially to the red shift of AXT upon binding to EPD-BCP1 (469→494 nm according to the DFT calculations versus experimental values of 478→549 nm). These findings are in good agreement with those of previous theoretical studies on β-CR (45, 46, 51). In contrast, the end-ring rotations of MXT upon binding to EPD-BCP1 make almost no contribution to the red shift (457→455 nm according to the DFT calculations), regardless of the large rotations of the two end rings, as mentioned before. The influences of the protein environment on the red shifts of AXT and MXT were estimated on the basis of QM1/QM2/MM geometry optimizations and subsequent excitation energy calculations. Consequently, the calculated λmax of 567 nm for AXT in the protein is comparable to the experimental value of 549 nm, whereas the calculated λmax of 513 nm for MXT is much shorter than the experimental value of 550 nm. Further studies with spectroscopic and theoretical approaches will elucidate the mechanism of the bathochromic shift of MXT upon binding to EPD-BCP1.
Previously, an AXT-binding carotenoprotein in a photooxidative stress-tolerant eukaryotic microalga was identified as the first water-soluble carotenoprotein in eukaryotic plants with an N-terminal signal sequence for cell-surface secretion. EPD-BCP1 is also a water-soluble AXT- and MXT-binding carotenoprotein expressed on the cell surface. Considering the roles of AXT and MXT in providing protection from sunlight and in scavenging singlet oxygen, these proteins may be primarily adapted to localize the carotenoids on the cell surface. Because carotenoid-binding proteins vary widely and have evolved independently across taxonomic groups, further research is required to clarify the diversity, evolution, and functional commonality of the selective binding of different carotenoid species among a wide range of organisms.
Animal material and sponge characterization
Blue sponges were collected from the coral reef off the coast of Okinawa prefecture, with the help of a fishing company, between 2016 and 2021. Collecting marine blue sponges is not prohibited in Japan. Our study did not use laboratory animals and did not require ethics oversight. Type specimens were deposited into the collection of the National Museum of Nature and Science (NSMT) under deposit number NSMT-Po-2491. The sponge samples were classified by morphological and genotypic characterization with the help of an expert sponge taxonomist. Briefly, fresh sponge samples were fixed in 95% ethanol solution for morphological observation of spicule type, skeletal formations, and architectural structures. The skeletal structure and scleral features were observed under an optical microscope. The morphological features of the sponge samples matched those of the genus Haliclona as described in "Systema Porifera: A Guide to the Classification of Sponges" (30, 31). Genomic DNA was extracted from sponge tissues using a DNeasy Tissue Kit (Qiagen) in accordance with the manufacturer's protocol. The cox1 gene was amplified by PCR using the primers FCo1490 (5′-GGTCAACAAATCATAAAGAYATYGG) and RCo2198 (5′-TAAACTTCAGGGTGACCAAARAAYCA) (32, 33). The resulting PCR products were purified using the QIAquick PCR purification kit (Qiagen) and sequenced by Macrogen. The sequences of the cox1 genes (658 bp) obtained from sponge samples collected in 2016 and 2018 were identical and were used to search for sequence similarities. These sequences and those of taxonomically related species were aligned using ClustalW for phylogenetic analyses (56). Phylogenetic trees were constructed using maximum-likelihood methods with MEGA X (57).
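As an illustration of the sequence-comparison step, the short sketch below computes percent identity between two aligned cox1 sequences. It is a minimal stand-in for the ClustalW/MEGA X workflow described above; the sequences shown are hypothetical fragments, not the actual 658 bp amplicons.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two aligned, equal-length sequences,
    ignoring alignment columns that are gaps in either sequence."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Hypothetical aligned cox1 fragments:
s2016 = "GGTCAACAAATCATAAAGATATTGG"
s2018 = "GGTCAACAAATCATAAAGATATTGG"
print(f"{percent_identity(s2016, s2018):.1f}% identity")  # 100.0%
```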
Purification procedures and determination of peptide sequences
Fresh sponge samples were gently squeezed, and the released blue droplets were collected as crude extracts. About 1 ml of blue extract (Abs557 = 2.0) was obtained from 1.0 g of sponge tissue after removing excess seawater. Aqueous supernatants were obtained after ultracentrifugation at 100,000g for 2 h. A single aqueous blue fraction was collected by passage through a gel-filtration column (HR100; GE Healthcare) and a DEAE Sepharose Fast Flow column (GE Healthcare) using 50 mM Tris-HCl buffer, pH 7.0. Final purification yields of the blue protein were 50 to 70%. After the blue fraction was concentrated, the purified proteins were separated by SDS-PAGE. The molecular weights of the proteins were estimated using a commercial marker kit (Precision Plus Protein; Bio-Rad). The N-terminal amino acid sequence was determined by the Edman degradation method using a peptide sequencer (PPSQ30; Shimadzu). The formyl groups were deblocked by soaking the transferred membrane overnight in 100 mM HCl solution. Periodic acid-Schiff staining was performed with a commercial staining kit (Merck) in accordance with the manufacturer's instructions. The molecular masses of the purified proteins were determined by gel-filtration chromatography calibrated with the following molecular standards (Pharmacia): ribonuclease (13.7 kDa), carbonic anhydrase (29 kDa), ovalbumin (44 kDa), conalbumin (75 kDa), aldolase (158 kDa), thyroglobulin (670 kDa), and blue dextran (2000 kDa).
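Gel-filtration calibration of this kind is typically done by fitting log(molecular mass) against elution volume for the standards and interpolating the unknown. The sketch below shows that step; the elution volumes are hypothetical placeholders, since only the standard masses are given above, and blue dextran (a void-volume marker) is excluded from the fit.

```python
import numpy as np

# Standards: (mass in kDa, hypothetical elution volume in ml).
standards = [(13.7, 17.2), (29, 15.8), (44, 14.9),
             (75, 13.9), (158, 12.4), (670, 9.6)]
masses, volumes = map(np.array, zip(*standards))

# Linear fit of log10(mass) versus elution volume.
slope, intercept = np.polyfit(volumes, np.log10(masses), 1)

def estimate_mass(elution_volume_ml: float) -> float:
    """Interpolated molecular mass (kDa) from the calibration line."""
    return 10 ** (slope * elution_volume_ml + intercept)

print(f"Estimated mass at 14.5 ml: {estimate_mass(14.5):.0f} kDa")
```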
Carotenoid extraction and identification
Carotenoids were extracted from the purified carotenoprotein by the Bligh-Dyer method, as successive extractions with methanol:chloroform (5:2) with gentle mixing in a tube. After the addition of chloroform and then water (2:3), the organic phase was obtained and evaporated to dryness under nitrogen gas. The extracted carotenoids were completely dissolved in acetone.
Carotenoids were identified using an HPLC photodiode array system (200-700 nm) (L-2455; Hitachi) equipped with a Capcell Pak C18 reversed-phase column (150 × 4.6 mm i.d., 5 μm particle size; Shiseido). The solvent system was a mixture of methanol/water (80/20, v/v, solvent A) and a mixture of acetone/methanol (1/1, v/v, solvent B). The column was eluted at a flow rate of 1.0 ml min−1 with a linear gradient: 0 to 4 min, 75% A:25% B to 50% A:50% B; 4 to 20 min, 50% A:50% B to 100% B; and 20 to 50 min, 100% B. The same column system described previously was used for LC-MS. Positive ion mass spectra were recorded in full scan mode (m/z 70-1000) with an electrospray ionization source and a Q-Exactive Focus Orbitrap LC-MS/MS system (ThermoFisher Scientific). The interface voltage was set at 4.5 or −3.5 kV. Nitrogen gas was used as the nebulizing gas at a flow rate of 1.5 l min−1. The charged droplet and heat block temperatures were both 200 °C. The molecular mass data from high-resolution LC-MS were analyzed with Compound Discoverer, version 2.1 software (ThermoFisher Scientific). 1H-NMR (500 MHz) spectra of the peak P1 and peak P2 components in CDCl3 were measured using a Unity Inova-500 system (Varian). The chirality of AXT was determined using a Sumichiral OA-2000 column (Agilent Technologies) with n-hexane/chloroform/ethanol (48:16:1 by volume) as the eluent at a flow rate of 1.0 ml/min (58). (3R,3′R)-AXT from the red yeast Xanthophyllomyces was used as a control.
Crystallization, X-ray diffraction data collection, and structure determination
The natively purified EPD-BCP1 was crystallized by the sitting-drop vapor diffusion method at 23 °C, where the inner drops were prepared by mixing 1.0 μl of 9.8 mg/ml protein in 10 mM Tris-HCl (pH 7.5) and 1.0 μl of reservoir solution. The best crystal was grown within 4 weeks with a reservoir solution consisting of 0.1 M Tris-HCl (pH 8.5), 0.2 M MgCl2, and 20% (w/v) PEG 8000. The crystal was transferred to the reservoir solution supplemented with 20% ethylene glycol for cryoprotection and flash-frozen in liquid nitrogen. X-ray diffraction data were collected at the beamline BL-5A of the KEK Photon Factory at a wavelength of 1.0000 Å at 100 K. The diffraction data were integrated and scaled with iMosflm and Aimless from the CCP4 software suite (59), respectively. The phases were determined by molecular replacement using MOLREP (59), where the AlphaFold2 (42) model calculated from the amino acid sequence of the α subunit of EPD-BCP1 was used as a search model. The subsequent model building and structure refinement were performed iteratively with Coot (60) and Refmac5 (61), respectively. The restraint CIF files for AXT and MXT were prepared based on the QM1/QM2/MM geometry-optimized structures. The data collection and refinement statistics are summarized in Table S3, where the Ramachandran analysis was performed with Rampage (62). Structure figures were prepared with PyMOL (The PyMOL Molecular Graphics System, version 2.4.0; Schrödinger, LLC) and Avogadro (63).
Preparation of apoprotein and its reconstitution with carotenoids
Apo-EPD-BCP1 was obtained by treating the purified protein with the organic solvents diethyl ether/acetone (1/1). AXT and MXT were extracted from the purified EPD-BCP1 as described previously. Canthaxanthin and fucoxanthin were extracted from microalgal AstaP-orange1 and the brown alga Eisenia sp., respectively (18, 44). Briefly, the crude extracted pigments were dissolved in hexane, separated on silica gel HPTLC plates with a concentration zone (Merck; catalog no. 1.13748.0001), and developed with dichloromethane-ethyl acetate-acetone (1:2:1, by volume). The extracted carotenoids were further collected by C18-HPLC. The authenticity of the purified carotenoids was determined based on the absorption spectra obtained using an HPLC photodiode array detector, HPLC retention times, and molecular masses from high-resolution LC-MS analysis, in comparison with those of standard compounds (18, 19, 44). Apo-EPD-BCP1 was used for reconstitution assays after removing the organic solvents under an N2 gas stream. Apoprotein (dissolved in 50 mM Tris-HCl buffer [pH 7.5]) and a small amount of carotenoid solution (dissolved in acetone) were mixed together and incubated overnight on ice (44). Unbound insoluble carotenoids were removed by centrifugation for 5 min at 15,000g, and trace amounts of free carotenoids were removed by brief extraction with diethyl ether, followed by removal of the residual diethyl ether under an N2 gas stream. Absorption spectra of apo- and holo-EPD-BCP1 were measured using a spectrophotometer (Shimadzu UV-1000). A blank experiment (without protein) did not produce detectable carotenoids by this method.
Construction of a cDNA library and cloning of cDNA encoding EPD-BCP1
Total RNA was extracted with Trizol reagent (ThermoFisher Scientific) and reverse transcribed into cDNA, which was used to generate a full-length cDNA library with the SMARTer Pico PCR cDNA Synthesis Kit (Takara Bio) in accordance with the manufacturer's instructions. The cDNA libraries were sequenced using an Illumina HiSeq 2500 system (Illumina). FASTQ files were imported into the CLC Genomics Workbench (QIAGEN), and de novo sequence assembly was performed. Approximately 30,000 contigs were generated after the de novo assembly by the CLC Genomics Workbench. To confirm the nucleotide sequences of the cDNAs encoding the N-terminal sequences of EPD-BCP1α and EPD-BCP1β, the cDNAs were amplified by PCR using the cDNA library as a template. The amplified PCR product was sequenced, and the full-length cDNA sequence was confirmed by juxtaposing the sequences of the 5′ and 3′ portions. The N-terminal signal peptide sequences were predicted by the SignalP-5.0 server (http://www.cbs.dtu.dk/services/SignalP/).
Phylogenetic analysis of EPD-BCP1
Sequences were initially aligned using ClustalW, and a phylogenetic tree of EPD-BCP1 was generated by the maximum-likelihood method with RAxML using the best-fit model (GTR-GAMMA) (64). The accession numbers of the protein sequences used for the phylogenetic analysis are listed in supporting information 1. Alignment figures were prepared with ESPript 3.0 (65). The tree was drawn with iTOL, version 6 (https://itol.embl.de/) (66).
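For readers who want to inspect such a tree programmatically, the following is a small sketch using Biopython's Phylo module; the file name tree.nwk is a hypothetical placeholder for the Newick output of RAxML.

```python
from Bio import Phylo

# Read the maximum-likelihood tree produced by RAxML (Newick format).
tree = Phylo.read("tree.nwk", "newick")

# Quick textual rendering and a list of terminal taxa.
Phylo.draw_ascii(tree)
for leaf in tree.get_terminals():
    print(leaf.name)
```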
Data availability
Type specimens were deposited into the collection of the NSMT (accession number: NSMT-Po-2491). Accession numbers and source data are presented in the main text and supporting information. All data used to support the findings of this study have been included in the article and supporting information or made available through the National Center for Biotechnology Information. The sponge proteins are available from the corresponding author upon reasonable request. The atomic coordinates and structure factors for EPD-BCP1 have been deposited in the Protein Data Bank under accession number 8I34.
Figure 1. Purification of blue protein from blue sponges. A, photographs of blue sponges collected in 2018 (left panel) and 2016 (right panel). Scale bar represents 3.0 cm. B, change in body color before (left panel) and after (right panel) squeezing crude extracts from the sponge body. Scale bar represents 1.0 cm. C, absorption spectrum of crude extract obtained from a blue sponge collected in 2018. D, single elution peak detected with a photodiode array detector following gel filtration (left panel); SDS-PAGE of purified blue protein (right panel). E, absorption spectrum of the purified blue protein in Tris-HCl buffer (pH 7.5). The color of the purified protein is shown in the inset.
Figure 2. Primary structure, carotenoid determination, and localization of the purified blue protein. A, alignment of the deduced amino acid sequences of EPD-BCP1α and EPD-BCP1β. Protein sequences were aligned using ClustalW. The predicted N-terminal signal sequence detected by SignalP is shown in red. The N-terminal amino acid sequence obtained from the purified blue protein is shown in italics. Putative N-glycosylation sites are highlighted in yellow. Four conserved cysteine residues are shown in green. Identical amino acid residues are indicated by asterisks, and similar amino acid residues are indicated by dots. B, HPLC elution profiles of the bound carotenoids (upper panel) and absorption spectra (lower panel) of the peaks P1 and P2. C, structures of (3R,3′R)-astaxanthin (AXT) and mytiloxanthin (MXT). D, PAS staining of the purified blue protein for detecting protein glycosylation. E, localization of the blue pigment: dissected surface, a sliced specimen, magnified surface, and microscopic observation of sponge blue cells around the skeleton (inset: magnification of the cells). Scale bar represents 1 cm (orange), 1 mm (red), and 25 μm (black). BCP1, blue carotenoprotein-1; EPD, ependymin; PAS, periodic acid-Schiff.
Figure 3. Crystal structure of EPD-BCP1. A, overall structure of the heterodimer, with the α- and β-subunits shown in different colors. N and C termini are labeled as N and C, respectively. AXT and MXT are shown with the ball-and-stick model and labeled. N-glycans and the linking asparagine residues are shown with the stick model and labeled. Pairs of cysteine residues forming disulfide bonds are shown with the ball-and-stick model. B, EPD-BCP1 and β-CR superposed such that the pseudo-twofold axis of each heterodimer is in the center of the figure and perpendicular to the paper. In the right panel, only β-CR is rotated by 90° clockwise along the x-axis. C, comparison of the pigment configuration between EPD-BCP1 and β-CR (PDB ID: 1GKA). The left panel is the view of the rectangular window in the right panel of B. The two AXT molecules observed in β-CR are labeled as AXT1 and AXT2, as in Ref. (4). The pseudo-twofold axis found for AXT1-AXT2 in β-CR is shown. β-CR, β-crustacyanin; AXT, astaxanthin; BCP, blue carotenoprotein; EPD, ependymin; MXT, mytiloxanthin.
Figure 4. Protein environment surrounding the two carotenoids. A, residues in between AXT and MXT. The aromatic ring of F144(β) exceptionally protrudes over AXT. B, residues within 4 Å of AXT. The atoms in carboxylate groups labeled with red asterisks are assumed to be protonated. C, residues within 4 Å of MXT. AXT is omitted for clarity. In B and C, some labels displayed in A are also omitted for clarity. In A-C, hydrogen bonds are shown with gray dotted lines. Diagonally positioned residues (e.g., L44(α) and V47(β)) are equivalent to each other. AXT, astaxanthin; MXT, mytiloxanthin.
Figure 5. Reconstitution experiments. A, reconstitution of the apoprotein with the detached orange carotenoids (detached car). Spectra of the reconstituted protein (blue line) obtained after mixing the apoprotein (gray line) with the detached orange carotenoids dissolved in acetone (orange dotted line); purified protein used in the reconstitution study (blue dotted line). B, spectra of holo-EPD-BCP1 reconstituted with astaxanthin (AXT) and mytiloxanthin (MXT) (blue line), AXT only (red line), and MXT only (purple line). C, absorption spectrum of holo-EPD-BCP1 reconstituted with canthaxanthin (green line), and canthaxanthin dissolved in acetone (green dots). The structure of canthaxanthin is indicated above the graph. D, absorption spectrum of holo-EPD-BCP1 reconstituted with fucoxanthin (brown line), and fucoxanthin dissolved in acetone (brown dots). A blank experiment (without protein) did not produce detectable carotenoids by this method. The structure of fucoxanthin is indicated above the graph. BCP1, blue carotenoprotein-1; EPD, ependymin.
"year": 2023,
"sha1": "1dd44c63db62d40de66cb77c62b1260f5f09cf86",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925823021385/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20a8ebe1d1c78499d6ba91cb88a90d76f5ed9f06",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A model of rotating convection in stellar and planetary interiors: II -- gravito-inertial wave generation
Gravito-inertial waves are excited at the interface of convective and radiative regions and by the Reynolds stresses in the bulk of the convection zones of rotating stars and planets. Such waves have notable asteroseismic signatures in the frequency spectra of rotating stars, particularly among rapidly rotating early-type stars, which provides a means of probing their internal structure and dynamics. They can also transport angular momentum, chemical species, and energy from the excitation region to where they dissipate in radiative regions. To estimate the excitation and the convective parameter dependence of the amplitude of those waves, a monomodal model for stellar and planetary convection as described in Paper I is employed, which provides the magnitude of the rms convective velocity as a function of rotation rate. With this convection model, two channels for wave driving are considered: excitation at a boundary between convectively stable and unstable regions and excitation due to Reynolds stresses. Parameter regimes are found where the sub-inertial waves may carry a significant energy flux, depending upon the convective Rossby number, the interface stiffness, and the wave frequency. The super-inertial waves can also be enhanced, but only for convective Rossby numbers near unity. Interfacially excited waves have a peak energy flux near the lower cutoff frequency when the convective Rossby number of the flows that excite them is below a critical Rossby number that depends upon the stiffness of the interface, whereas that flux decreases when the convective Rossby number is larger than this critical value.
Therefore, the excitation mechanisms and the resulting amplitudes and frequency spectrum need to be understood in an astrophysical context. In this work, the focus is on the stochastic excitation of GIWs at convective-radiative interfaces and in the bulk of convective regions by turbulent Reynolds stresses. Indeed, small-scale eddies or large-scale turbulent structures such as convective plumes are able to perturb the interface between radiative and convective zones, leading to the excitation of IGW and GIW packets. The influence of small-scale eddies has been modelled semi-analytically for IGWs by Press (1981) and Zahn et al. (1997), for example, whereas the impact of larger-scale flows, modelled analytically as collections of plumes, on IGWs has been considered by Schatzman (1993) and Pinçon et al. (2016), for example. These excitation mechanisms have also been observed in 2D and 3D local and global numerical simulations (e.g., Hurlburt et al. 1986; Browning et al. 2004; Dintrans et al. 2005; Kiraga et al. 2005; Rogers & Glatzmaier 2005; Rogers et al. 2006, 2013; Alvan et al. 2014, 2015; Augustson et al. 2016; Edelmann et al. 2019). In addition, turbulent Reynolds stresses in the bulk of convective regions also contribute to the generation of IGWs, both in late-type stars (e.g., Belkacem et al. 2009b) and in early-type stars (e.g., Samadi et al. 2010; Shiode et al. 2013), through their coupling to the evanescent tail of the IGWs in the convective zone (e.g., Lecoanet & Quataert 2013). This distributed effect due to Reynolds stresses has been studied in laboratory experiments on the temperature-stratified convective to non-convective transition of water, as seen in Le Bars et al. (2015), Lecoanet et al. (2015), and Couston et al. (2018), which find that the Reynolds stresses are the dominant wave excitation mechanism in that system. However, most of the above-mentioned studies have neglected the action of rotation both on the turbulent convective flows (see e.g., Julien et al. 2006; Davidson 2013; Brun et al. 2017; Alexakis & Biferale 2018, and references therein) and on the IGWs that become GIWs. Belkacem et al. (2009a) have presented a formalism for the study of the stochastic excitation of IGWs in rotating stars, although only in the case of slowly rotating stars. Building upon this approach, Mathis et al. (2014) demonstrated how the nature of the couplings between the GIWs and the turbulent Reynolds stresses can be strongly affected by the Coriolis acceleration. On one hand, those waves with frequencies above twice the rotation rate, the super-inertial waves, are evanescent in stellar convective regions, and thus only weakly couple to the Reynolds stresses away from the convective-radiative transition. On the other hand, those waves with frequencies below twice the rotation rate, the sub-inertial waves, become propagative inertial waves in stellar convection zones and are intrinsically coupled with the turbulent convective flows throughout the convection zone. The reader is referred to the detailed discussion of this in Mathis et al. (2014). Moreover, turbulent structures become strongly anisotropic, with global alignment along the rotation axis, while the efficiency of the heat transfer between different scales is globally decreased (e.g., Sen et al. 2012; Julien et al. 2012). Additionally, turbulent convective structures can be understood as a combination of inertial waves in the asymptotic regime of rapid rotation (e.g., Davidson 2013; Clark di Leoni et al. 2014).
These mechanisms can be very important in stars, since late-type stars rotate rapidly during their pre-main-sequence phase (e.g., Gallet & Bouvier 2015), while early-type stars generally have high rotation rates throughout their evolution (e.g., Maeder & Meynet 2000, and references therein). Yet, Mathis et al. (2014) did not provide a quantitative estimate of the GIW amplitudes, frequency spectrum, and induced transport of momentum and chemicals, due to the lack of a prescription for rotating turbulent convection. Stevenson (1979) and Augustson & Mathis (2019) (hereafter Paper I) have derived mixing-length-based scaling laws for the primary properties of small-scale convective eddies in rotating stellar and planetary convection (e.g., their rms velocity, horizontal convective scale, and the local superadiabaticity). Direct nonlinear f-plane numerical simulations by Käpylä et al. (2005) and Barker et al. (2014) have shown that these prescriptions appear to hold up well in polar regions. Thus, the convective scaling laws are employed to provide a first quantitative analytical estimate of the amplitudes and frequencies of stochastically excited GIWs. Indeed, such a model permits the action of (rapid) rotation to be taken into account both for the propagation of GIWs and for the nature of the convection. The obtained formalism constitutes a generalization of the work of Lecoanet & Quataert (2013) for pure IGWs in the nonrotating case. This formalism can be implemented into stellar evolution and oscillation codes to explore the properties and consequences of GIWs across the Hertzsprung-Russell diagram. Therefore, this represents a step toward building a coherent theoretical framework to study the seismology of rotating stars and the wave-induced transport in their interiors, working synergistically with the ongoing development of numerical simulations and laboratory experiments. For instance, see the recent laboratory experiments by Rodda et al. (2018).
Outline
The model of convection derived in Paper I is employed to estimate the GIW energy flux into the stable region adjacent to convective zones. The general framework of the convection model is briefly summarized in §2. GIWs and their excitation mechanisms are briefly reviewed in §3. Following the arguments of Press (1981) and André et al. (2017), the interfacial generation of GIWs and their associated energy flux is assessed in §4. Subsequently, in §5, an estimate is given for the energy flux of GIWs excited by Reynolds stresses using the convection model. A summary of the results and perspectives is presented in §6.
Hypotheses and Localization
A self-consistent and yet computationally tractable treatment of stellar and planetary convection has been a long-sought goal, with many such models having been employed in evolution models. One such model, based upon a variational principle for the maximization of the heat flux (Howard 1963) and a turbulent closure assumption for the velocity amplitude (Stevenson 1979), has been expanded upon in Paper I (Augustson & Mathis 2019). In the context of GIW excitation, one needs to ascertain the amplitude of the velocity field that excites the waves, both through Reynolds stresses acting throughout the bulk of the convection zone on both the evanescent super-inertial waves and the propagating inertial waves, and through thermal buoyancy exciting them directly in the region of convective penetration.
To that end, a local region is considered as in Paper I, where a small 3D section of the spherical geometry is the focus of the analysis. This region covers a portion of both the convectively stable and unstable zones, as shown in Figure 1, where the setup is configured for a low-mass star with an external convective envelope. One may exchange these regions when considering a more massive star with a convective core. In this local frame, there is an angle between the effective gravity g_eff and the local rotation vector that is equivalent to the colatitude θ. The Cartesian coordinates are defined such that the vertical direction z is anti-aligned with the gravity vector, the horizontal direction y lies in the meridional plane and points toward the north pole defined by the rotation vector, and the horizontal direction x is equivalent to the azimuthal direction. The angle ψ in the horizontal plane defines the direction of horizontal wave propagation χ.
While the details of the derivation of the heat-flux-maximized rotating convection model may be found in Paper I, it is necessary to recall a few of the relevant results as they are applied in subsequent sections. The heuristic model is local, such that the length scales of the flow are much smaller than either the density or pressure scale heights, thus ignoring the global-scale flows, which will be the focus of a forthcoming paper. The dynamics are further considered to be in the Boussinesq limit. This localization of the convection therefore consists of an infinite layer of a nearly incompressible fluid with a small thermal expansion coefficient α_T = −∂ln ρ/∂T|_P that is confined between two infinite impenetrable boundaries differing in temperature by ΔT = T(z_2) − T(z_c), with the lower boundary of the full domain located at z_1 and the upper boundary at z_2. In this model, it is assumed that T(z_2) < T(z_c) and that the boundaries of the convective layer are separated by a distance ℓ_0 = z_2 − z_c, as in Figure 1, where z_c is the point of transition between the convectively stable and unstable regions.
Figure 1. Coordinate system adopted for the models of rotating convection and gravity-wave excitation, showing (a) the global geometry and f-plane localization, (b) the f-plane geometry, and (c) the direction χ in the horizontal plane of the f-plane. The orange tones denote a convective region and the yellow tones denote a stable region for late-type stars, and vice versa for early-type stars.
The recent motivation behind the development of the convection model arose from the numerical work of Käpylä et al. (2005) and Barker et al. (2014), where it was found that the rotational scalings of the amplitude of the temperature, its gradient, and the velocity field compare well with those derived in Stevenson (1979). Moreover, the experimental work of Townsend (1962) and the analysis of Howard (1963) have shown that a heat-flux maximization principle provides a sound basis for the description of Rayleigh-Bénard convection, leading to its use here. Thus, two hypotheses underlie the convection model: the Malkus conjecture that the convection arranges itself to maximize the heat flux, and that the nonlinear velocity field can be characterized by the dispersion relationship of the linearized dynamics. Constructing the model of rotating convection then consists of three steps: deriving a dispersion relationship that links the normalized growth rate ŝ = s/N_* to q = N_{*,0}/N_*, which is the ratio of the superadiabaticity of the nonrotating case to that of the rotating case (where N_*² = |g α_T β| is the absolute value of the square of the Brunt-Väisälä frequency), and to the normalized wavevector ξ³ = k²/k_z²; maximizing the heat flux with respect to ξ; and assuming an invariant maximum heat flux that then closes this three-variable system.
Dispersion Relationship and Flux Maximization
For rotating convection, one may show that for impenetrable and stress-free boundary conditions the solutions of the equations of motion are periodic in the horizontal, sinusoidal in the vertical, and exponential in time, e.g., v_z = v sin[k_z(z − z_c)] exp(i k_⊥·r + st), where k_⊥ is the horizontal wavevector, s is the growth rate, r is the local coordinate vector, and v is a constant velocity amplitude. To satisfy the impenetrable, stress-free, and fixed-temperature boundary conditions, it is required that the vertical wavenumber be k_z = nπ/ℓ_0. The introduction of this solution into the reduced linearized equation of motion yields the following dispersion relationship that relates s to the wavevector k:

(s + κk²)(s + νk²)² k² + g α_T β k_⊥² (s + νk²) + 4(Ω·k)² (s + κk²) = 0. (1)

This equation may be nondimensionalized by dividing through by the appropriate powers of N_* and k_z, leading to the definition of the additional normalized quantities ŝ = s/N_*, ξ³ = k²/k_z², V_0 = νk_z²/N_{*,0}, and K_0 = κk_z²/N_{*,0} (see Table 1). Introducing these into the dispersion relationship yields its nondimensional form (Equation 3), with O = 4(Ω·k)²/(N_*² k_z²), where Ω_0 is the bulk rotation rate of the system. The characteristic velocity v_0 of the nonrotating and nondiffusive case is derived from the growth rate and the maximizing wavevector in that case, with s_0² = (3/5)|g_0 α_T β_0| = (3/5) N_{*,0}², β_0 being the thermal gradient, g_0 being the effective gravity, and v_0 = s_0/k_0 = √6 N_{*,0}/(5 k_z) (Equation 5). Thus, the definition of the convective Rossby number is Ro_c = v_0/(2Ω_0 ℓ_0), which implies that 4Ω_0²/N_{*,0}² = 6/(25π² Ro_c²). The superadiabaticity for this system is ε = H_P β/T, meaning that N_*² = |g α_T T ε/H_P|, where H_P is the pressure scale height. The potential temperature gradient in the nonrotating and nondiffusive case is ascertained from the Malkus-Howard turbulence model (Malkus 1954; Howard 1963), which yields a value of N_{*,0}. It is also useful to compare the timescales relative to N_{*,0}. Letting the ratio of superadiabaticities be ε_0/ε = q², all parametric quantities have equivalencies in terms of q, Ro_c, V_0, and K_0, so the dispersion relationship (Equation 3) and the heat flux may be written in those variables, where the heat flux normalization is F_0 = ρ c_P N_{*,0}³/(g α_T k_z²). To ascertain the scaling of the superadiabaticity, the velocity, and the horizontal wavevector with rotation and diffusion, an additional assumption is made to close the system. This assumption is that the maximum heat flux is invariant to any parameters, max[F] = max[F]_0, so the heat flux is equal to the maximum value max[F]_0 obtained for the nonrotating case, which fits with the assumption that the energy generation of the star is not strongly affected by rotation.
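The dispersion relationship in Equation 1 is a cubic in the growth rate s, so its roots can be obtained numerically for any given wavevector. The sketch below (Python/NumPy) does this for the convectively unstable case, taking g α_T β = −N_*² consistent with the definition N_*² = |g α_T β|, and projecting Ω·k onto the vertical wavevector (an assumption of this sketch); all parameter values are illustrative placeholders.

```python
import numpy as np

def growth_rates(kperp, kz, nu, kappa, N2, Omega, theta):
    """Roots s of (s + kappa k^2)(s + nu k^2)^2 k^2 - N2 kperp^2 (s + nu k^2)
       + 4 (Omega.k)^2 (s + kappa k^2) = 0, with Omega.k ~ Omega kz cos(theta)
       (rotation projected onto the vertical wavevector)."""
    k2 = kperp**2 + kz**2
    pk = np.array([1.0, kappa * k2])   # polynomial s + kappa k^2
    pn = np.array([1.0, nu * k2])      # polynomial s + nu k^2
    poly = k2 * np.polymul(pk, np.polymul(pn, pn))
    poly = np.polyadd(poly, -N2 * kperp**2 * pn)
    poly = np.polyadd(poly, 4.0 * (Omega * kz * np.cos(theta))**2 * pk)
    return np.roots(poly)

# Illustrative nondimensional values:
roots = growth_rates(kperp=2.0, kz=np.pi, nu=1e-3, kappa=1e-3,
                     N2=1.0, Omega=0.1, theta=0.0)
print("growing mode s =", max(r.real for r in roots))
```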
In the case of planetary and stellar interiors, the viscous damping timescale is generally longer than the convective overturning timescale (e.g., νk_z² ≪ N_{*,0}, or V_0 ≪ 1). Thus, the maximized heat flux invariance is much simpler to treat. In particular, the heat flux invariance condition under this assumption then implies that ŝ = s̄ q ξ, where s̄ = 2^{1/3} 3^{1/2} 5^{−5/6} and max[F]_0 = (6/25)√(3/5) F_0 follows from the definition of the flux and the maximizing wavevector used to define v_0 above in Equation 5.
Table 1. Frequently used symbols in the convection model:
a = k_x/k_z — maximizing horizontal wavevector;
k_z = π/ℓ_0 — maximizing vertical wavevector;
v_0 — velocity of the nonrotating case;
V_0 = ν k_z²/N_{*,0} — normalized viscosity;
q = N_{*,0}/N_* — ratio of buoyancy timescales;
ξ³ = k²/k_z² — normalized wavevector.
Rotational Scaling of Superadiabaticity, Velocity, and Wavevector
The assumption of this convection model is that the magnitude of the velocity is defined as the ratio of the maximizing growth rate and wavevector. With the above approximation, the velocity amplitude can be defined relative to the nondiffusive and nonrotating case scales, without loss of generality, as v/v_0 = (5/√6) ŝ/(q ξ^{3/2}), which, with the invariance condition ŝ = s̄ q ξ, reduces to v/v_0 = (ξ_0/ξ)^{1/2} with ξ_0³ = 5/2. So only the maximizing wavevector needs to be found in order to ascertain the relative velocity amplitude. For reference, the symbols that are frequently used from this section are listed in Table 1. With all the equations in hand, the horizontal wavevector may be seen to be the root of a fourteenth-order polynomial, whereas the superadiabaticity follows from the corresponding value of q. For the study of adiabatic GIWs, the nondiffusive model is employed, where V_0 → 0 and K_0 → 0, leading to

ŝ² ξ³ = (ξ³ − 1) − (6 cos²θ/(25π² Ro_c²)) q²

and, combining this with ŝ = s̄ q ξ,

q² = 25π² Ro_c² (ξ³ − 1) / [25π² Ro_c² s̄² ξ⁵ + 6 cos²θ].
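As a numerical illustration, the closure can be reduced in the nondiffusive limit to a single root-finding problem. The sketch below assumes the heat flux takes the form F ∝ ŝ³/(q³ξ³), consistent with the normalization F_0 above but an assumption of this sketch; maximizing this flux together with the dispersion relation then gives q³(1 + c q²) = 1 and ξ³ = (5/2)(1 + c q²), with c = 6cos²θ/(25π²Ro_c²), which recovers the expected limits q → 1 and ξ³ → 5/2 as Ro_c → ∞.

```python
import numpy as np
from scipy.optimize import brentq

def nondiffusive_closure(Ro_c, theta):
    """Solve q^3 (1 + c q^2) = 1 for q, then evaluate xi and v/v0, with
    c = 6 cos^2(theta) / (25 pi^2 Ro_c^2).  The flux form F ~ s^3/(q^3 xi^3)
    used to derive this closure is an assumption of this sketch."""
    c = 6.0 * np.cos(theta)**2 / (25.0 * np.pi**2 * Ro_c**2)
    q = brentq(lambda x: x**3 * (1.0 + c * x**2) - 1.0, 1e-12, 1.0)
    xi = ((5.0 / 2.0) * (1.0 + c * q**2))**(1.0 / 3.0)
    xi0 = (5.0 / 2.0)**(1.0 / 3.0)
    return q, xi, np.sqrt(xi0 / xi)

for Ro in (10.0, 1.0, 0.1, 0.01):
    q, xi, v = nondiffusive_closure(Ro, theta=0.0)
    print(f"Ro_c = {Ro:6.2f}: q = {q:.3f}, xi = {xi:.3f}, v/v0 = {v:.3f}")
```

In the rapidly rotating limit this sketch gives q ∝ Ro_c^{2/5} and v/v_0 ∝ Ro_c^{1/5}, in line with the Stevenson-type scalings that the convection model is built upon.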
So, to ascertain the maximizing wavenumber, and thus the velocity and superadiabaticity, of the motions that maximize the heat flux one supplies the colatitude θ and the convective Rossby number of the flow Ro c . Now that the quantities related to the convection model have been defined, the impact of rotation on the convective excitation of gravito-inertial waves can be characterized.
GRAVITO-INERTIAL WAVES
When examining the excitation of GIWs, the region of interest is near the radiative-convective interface. As a first step toward a coherent global treatment of GIW excitation, the forthcoming analysis will share the same Cartesian geometry as the convection model, which is depicted in Figure 1, where the stable region is now also considered. For compactness, one may introduce the two components of the rotation vector along the vertical direction z and the latitudinal direction y as f = 2Ω_0 cos θ and f_s = 2Ω_0 sin θ.
As depicted in Figure 1, the waves to be considered propagate along a direction with an angle ψ in the horizontal x-y plane; the latitudinal component of the rotation vector has two images in this plane, with f_sc = 2Ω_0 sin θ cos ψ and f_ss = 2Ω_0 sin θ sin ψ.
In this analysis, both components of the rotation vector are kept in the equations of motion, as opposed to the so-called traditional approximation that considers only its vertical component in the Coriolis acceleration in order to yield a separable dynamical system. However, in the near-inertial frequency range, nontraditional effects act as a singular perturbation. Specifically, the phase of the wave has a vertical dependence that is absent under the traditional approximation. Also, as shown in Gerkema & Shrira (2005), when considering a non-constant stratification, sub-inertial GIWs can be trapped in regions of weak stratification. This behavior does not arise in the traditional approximation.
The near-inertial wave dynamics are quite sensitive to variations in the effective Coriolis parameters f and f_s, which could arise from a locally strong vortex. For instance, the low-Rossby-number, quasi-geostrophic flows that likely exist deep in stellar interiors and that impinge upon stable regions could transform a near-inertial wave from the super-inertial regime into the sub-inertial regime. The wave would suddenly find itself trapped in a waveguide, leading to a strong interaction between the near-inertial waves and large-scale motions. Such notions will be considered in a forthcoming investigation of global-scale dynamics.
Following Gerkema & Shrira (2005) and Mathis et al. (2014), the linearized equations of motion used to construct the convection model above are extended into the radiative region to study the coupling of the convection with both the gravity and inertial waves present in both regions. Specifically, these equations are Boussinesq and in the Emden-Cowling approximation (Emden 1907;Cowling 1941), where the gravitational potential perturbations are ignored, with where the buoyancy is b = −g eff ρ (r, t) /ρ 0 . One may eliminate the pressure, buoyancy and the horizontal velocities to yield an equation of motion for the vertical component of the velocity as where ∇ 2 ⊥ is the horizontal Laplacian and N 2 R (z) is the Brunt-Väisälä frequency in the radiative zone. If one then further considers monochromatic GIWs with a frequency ω that propagates along the direction characterized by the angle ψ in the horizontal plane and a coordinate χ = x cos ψ + y sin ψ along that direction as in Figure 1 with a solution of the form v z (r, t) = w (r) e iωt , one obtains the Poincaré equation for the GIWs Nominally, this is a nonseparable equation. However, it may be transformed when assuming the following spatial form of the solution as in Gerkema & Shrira (2005), where k ⊥ is the wavevector along χ, Ro w = ω/2Ω 0 is the wave Rossby number, and is the phase shift linking the horizontal and vertical directions. Yet the above form of the solution leads to a homogeneous Schrödinger-like equation in the vertical coordinate as where (31) Similar to the nonrotating case, this permits the use of the method of vertical modes to find the modal functions w j that satisfy the appropriate boundary conditions. Indeed, it can be shown that solutions of the form of Equation 28 constitute an orthogonal and complete basis (Gerkema & Shrira 2005).
In convectively stable regions where rotation is important, GIWs may propagate if their frequency falls within the range between ω_− and ω_+, with

ω_±² = ½ {N_R² + f² + f_ss² ± [(N_R² + f² + f_ss²)² − 4 f² N_R²]^{1/2}},

whereas in convection zones, where the stratification is nearly adiabatic, one has 0 ≤ ω ≤ (f² + f_ss²)^{1/2}, where the local wave Rossby number is Ro_w' = ω/(f² + f_ss²)^{1/2}. At the pole in a convectively stable region, this implies that the frequency must be between 2Ω_0 and N_R for the wave to propagate, where the Brunt-Väisälä frequency is typically much larger than the rotational frequency in the radiative core of late-type stars and the radiative envelope of early-type stars (e.g., Aerts et al. 2010). More generally, at other latitudes, the extremal propagative wave frequencies satisfy the hierarchy ω_− < 2Ω_0 < N_R < ω_+. As these waves propagate, the Brunt-Väisälä frequency varies; for instance, it becomes effectively zero in the convection zone. This implies that waves in the frequency range ω < 2Ω_0 are classified as sub-inertial GIWs in stable regions, becoming pure inertial waves in convective regions. Waves in the frequency range ω ≥ 2Ω_0 are classified as super-inertial GIWs in the stable region, which, in contrast to sub-inertial GIWs, become evanescent in the convective region. Figure 2 in Mathis et al. (2014) provides a concise visual reference for the hierarchy of frequencies, to which the reader is referred.
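The propagation bounds above are simple algebraic functions and can be evaluated directly. The sketch below computes ω_± and the convection-zone cutoff for given Ω_0, N_R, θ, and ψ; all parameter values are illustrative.

```python
import numpy as np

def giw_bounds(Omega0, N_R, theta, psi):
    """Propagation bounds (omega_minus, omega_plus) for GIWs in the stable
    region, and the convection-zone cutoff sqrt(f^2 + f_ss^2)."""
    f = 2.0 * Omega0 * np.cos(theta)
    f_ss = 2.0 * Omega0 * np.sin(theta) * np.sin(psi)
    T = N_R**2 + f**2 + f_ss**2
    disc = np.sqrt(T**2 - 4.0 * f**2 * N_R**2)
    om_minus = np.sqrt(0.5 * (T - disc))
    om_plus = np.sqrt(0.5 * (T + disc))
    cz_cutoff = np.sqrt(f**2 + f_ss**2)
    return om_minus, om_plus, cz_cutoff

# Illustrative values: N_R / (2 Omega0) = 10, mid-latitude, poleward waves.
om_m, om_p, cz = giw_bounds(Omega0=1.0, N_R=20.0, theta=np.pi/4, psi=np.pi/2)
print(f"omega- = {om_m:.3f}, omega+ = {om_p:.3f}, CZ cutoff = {cz:.3f}")
# A wave is sub-inertial if omega < 2*Omega0 and super-inertial otherwise.
```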
INTERFACIAL GRAVITO-INERTIAL WAVE ENERGY FLUX ESTIMATES
There are many models for estimating the magnitude of the gravity wave energy flux arising from the waves excited by convective flows. One of the first and most straightforward of such estimates is described in Press (1981, hereafter P81), where the wave energy flux across an interface connecting a convective region to a stable zone is computed by matching their respective pressure perturbations at that interface. Because the wave excitation occurs at an interface, the pressure perturbations are more important than the Reynolds stresses of the flows. What is more, the model assumes that the convective source is a delta function in Fourier space. So, the model permits only a single horizontal spatial scale 2π/k_c and a single time scale for the convection 2π/ω_c, which also selects the depth of the transitional interface where N_R(r) = ω_c for gravity waves, with ω_c = ω_0/√ξ and ω_0 = 2πv_0/ℓ_0; this lends itself well to the above convection model. This approach yields a wave energy flux proportional to the product of the convective kinetic energy flux and the ratio of the wave frequency to the Brunt-Väisälä frequency in the nonrotating case for gravity waves.
The convective model established above captures some aspects of the influence of rotation on the convective flows. Therefore, the impact of the Coriolis force on the stochastic excitation of GIWs can be evaluated. In this context, recent work has established an estimate of the GIW energy flux (André et al. 2017). It can be used to estimate the rotational scaling of the amplitude of the wave energy flux arising from the modified properties of the convective driving. From Equation 61 of André et al. (2017), the vertical GIW energy flux can be computed from the horizontal average of the product of the vertical velocity and the pressure perturbation, which, given the linearization of the Boussinesq equations for monochromatic waves propagating in a selected horizontal direction, can be evaluated in terms of v_w, the magnitude of the vertical velocity of the wave. Moreover, the solution for the vertical velocity implies the dispersion relationship ω² = [N_R² k_⊥² + (f k_z + f_ss k_⊥)²]/(k_⊥² + k_z²), where f and f_ss are defined above in Equation 20. Note that a reference table is given to help identify the many parameters in this section (Table 2). Following P81, further assumptions are necessary to complete the estimate of the wave energy flux. The convection is turbulent, so the fluctuating part of the velocity field is of the same order of magnitude as the convective eddy turnover velocity v ≈ ω_c/k_c, which implies that convective pressure perturbations are approximately P_c = ρ_0 v². Assuming that the pressure is continuous across the interface between the convectively stable and unstable regions, the horizontally averaged pressure perturbations of the propagating waves excited at the interface must then be equal to the turbulent pressure on the convective side of the interface. Those pressure perturbations follow from the solution for the vertical velocity and the nondiffusive Boussinesq equations (André et al. 2017), and their magnitude can then be written down for plane-wave solutions. Note, however, that the pressure matching condition fails for these modes at the pole for sub-inertial waves (ω → f). The reason is that the propagation domain of sub-inertial GIWs excludes the pole, and it becomes increasingly concentrated toward the equator for faster rotation rates (e.g., Dintrans & Rieutord 2000; Prat et al. 2016). Using the dispersion relationship and equating the two pressures yields the vertical wave velocity and, therefore, the wave energy flux density. Flows in a gravitationally stratified convective medium tend to have an extent in the direction of gravity that is much larger than their extent in the transverse directions. Therefore, the horizontal wavenumber of the convective flows is much greater than the vertical wavenumber, which implies that k_⊥,c ≈ ω_c/v. For efficient wave excitation, the frequency of the wave needs to be close to the source frequency (Press 1981; Lecoanet & Quataert 2013), which means that the horizontal scale of the waves will be similar to that of the convection. More generally, there will be a distribution of excitation efficiency as a function of the wave frequency ω, which may be peaked near the convective overturning frequency ω_c. However, since this distribution is unknown, the full frequency dependence is retained. This assumption simplifies the wave energy flux (Equation 40), and the nonrotating wave energy flux estimate found in P81, F_0 ≈ ρ_0 v³ ω/N_R, is recovered when letting Ω_0 → 0 (Equation 41). Finally, taking the ratio of the two energy fluxes, F_z/F_0, to better isolate the changes induced by rotation, assuming that the Brunt-Väisälä frequency is not directly impacted by rotation, and at a fixed wave frequency ω, one obtains Equation 42.
(40) In this case, the nonrotating wave energy flux estimate found in P81 can be recovered when letting Ω 0 → 0 as Finally, taking the ratio of the two energy fluxes to better isolate the changes induced by rotation, assuming that the Brunt-Väisälä frequency is not directly impacted by rotation, and at a fixed wave frequency ω, one has that To make this a bit more parametrically tractable, one can normalize the wave frequency as σ = ω/N R , and cast the rotational terms into a product of the stiffness of the transition S = N R /N 0 , with the convective Rossby number of the convection zone as defined above in §2 with Equations 8 and 6. Doing so yields where Ro w = ω/2Ω 0 = 5πσRo c S/ √ 6 and the wave Rossby number is Ro w = Ro w / sin 2 θ sin 2 ψ + cos 2 θ. This is depicted in Figures 2 and 3, where the colored region exhibits the magnitude of the logarithm of the energy flux ratio and the energy flux itself. An interfacial stiffness of S = 10 3 is chosen as it is a rough estimate of the potential stiffness in most stars, being the ratio of the buoyancy time-scale in the stable region to the convective overturning time. The choice of latitude determines the width of the frequency band of sub-inertial waves, where it is a minimum at the pole and maximum near the equator. This is due to the presence of a critical latitude of the gravito-inertial waves, where sub-inertial waves become evanescent (cos 2 θ c = Ro w 2 ). The direction of ψ = ±π/2 is chosen as it represents the maximum value of the energy flux ratio for the choice of other parameters and represents the waves traveling toward either of the poles as the energy flux ratio is an even parity function of the horizontal direction. Specifically, the poleward wave energy flux ratio is greater than the other extremal choice of the prograde or retrograde wave energy flux ratios. In particular, given the range of ω ± , there are no sub-inertial waves in the prograde or retrograde propagation case (ψ = {0, π}, respectively), whereas the super-inertial waves may still propagate with roughly the same frequency range. The white region corresponds to the domain of evanescent waves for a given convective Rossby number with frequencies below the lower cut-off frequency (σ − , dashed red line) for propagating GIWs. At frequencies above this threshold there is a frequency dependence of the energy flux ratio until reaching the upper cut-off where σ = ω/N R = 1, which arises due to the domain of validity when comparing GIW to gravity wave energy fluxes. Indeed, gravity waves may propagate if ω < N R , whereas super-inertial GIWs may propagate even when N R < ω < ω + . The transition between super-inertial and sub-inertial waves is demarked with the dashed blue line, with super-inertial waves for Ro w > 1 and subinertial waves for Ro w < 1. Here, interfacially-excited super-inertial waves exhibit both a frequency and convective Rossby number dependence. Specifically, the wave energy flux decreases algebraically with frequency at a fixed convective Rossby number and have a reduced energy flux for convective Rossby numbers below unity. The interfacially-excited sub-inertial waves possess a small frequency domain at a fixed convective Rossby number over which they are propagative. The sub-inertial wave energy flux increases with decreasing convective Rossby number until a critical convective Rossby number Ro c,crit = √ 6/ (5πS) as depicted by the vertical dashed orange line in Figure 2. 
Below this critical convective Rossby number, the sub-inertial wave energy flux decreases and the sub-inertial frequency domain is further restricted until it vanishes entirely, leaving no propagative sub-inertial waves. The effect of the stiffness is to lower (raise) the value of the critical convective Rossby number for larger (smaller) values of S, which corresponds to the ratio of the buoyancy timescale in the radiative zone to the convective overturning time. This may have important consequences for the wave-induced transport of angular momentum during the evolution of rotating stars. In particular, the convective Rossby number can vary by several orders of magnitude over a star's evolution from the PMS to its ultimate demise (e.g., Landin et al. 2010; Mathis et al. 2016; Charbonnel et al. 2017). Moreover, it can vary internally as a function of radius due to the local amplitude of the convective velocity and due to transport processes, angular momentum loss through winds, and structural changes that modify the local rotation rate. Figure 4 presents the variation of the convective Rossby number at the base of the convective envelope of low-mass stars (from 0.7 to 1.5 M_⊙) throughout their evolution. These convective Rossby numbers have been calculated using grids of stellar models that take rotation into account, computed with the STAREVOL code (Siess et al. 2000; Palacios et al. 2003; Decressin et al. 2009; Amard et al. 2016). The details of the micro- and macro-physics used for these grids are described in Amard et al. (2019). The dot-dashed purple line provides the value of the critical convective Rossby number (Ro_c,crit = √6/(5πS), shown here for S = 10³) for which an increase of the interfacial excitation of GIWs can be expected. For all the stars considered here, which have a median initial rotation (i.e., a period of 4.5 days), this should happen during their PMS. The flux ratio F_z/F_0 integrated over latitude θ, propagation direction ψ, and frequency is shown in Figure 5. This illustrates the general rotational trend of the interfacial flux, namely that it decreases with increasing rotation rate. However, there is a peak at a rotational frequency that depends upon the choice of the stiffness S and the Rossby number of the convection at the breakup velocity, Ro_b. Thus, for stars with a modest rotation rate, below approximately 0.2Ω_b, the interfacial or pressure-driven GIW flux could play a role in transport processes that is at least as important as the transport by IGWs. Yet, for more rapidly rotating stars, this flux becomes fairly negligible, due primarily to the reduction in the convective velocity amplitudes. Nevertheless, given the complex and nonanalytic form of the full integral, the exploration of the parameter dependence of this peak is left for future work. As a means of comparison, consider the spherical Couette flow laboratory experiments of Hoff et al. (2016), where it is found that the kinetic energy of the dominant inertial mode increases with decreasing wave Rossby number. Below a critical wave Rossby number, this leads to wave breaking and an increase of small-scale structures, which may be similar to the large increase of the wave energy flux for sub-inertial waves near the critical convective Rossby number described above.
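The quantities that organize Figures 2 to 5 are simple algebraic functions of (σ, Ro_c, S, θ, ψ), so the propagation diagnostics can be scripted directly. The sketch below evaluates the critical convective Rossby number and classifies a wave of normalized frequency σ; the parameter values are illustrative.

```python
import numpy as np

def critical_Roc(S):
    """Critical convective Rossby number, Ro_c,crit = sqrt(6)/(5 pi S)."""
    return np.sqrt(6.0) / (5.0 * np.pi * S)

def wave_rossby(sigma, Ro_c, S, theta, psi):
    """Ro_w = 5 pi sigma Ro_c S / sqrt(6), and the modified wave Rossby
    number Ro_w' = Ro_w / sqrt(sin^2(th) sin^2(psi) + cos^2(th))."""
    Ro_w = 5.0 * np.pi * sigma * Ro_c * S / np.sqrt(6.0)
    geom = np.sqrt(np.sin(theta)**2 * np.sin(psi)**2 + np.cos(theta)**2)
    return Ro_w, Ro_w / geom

S, Ro_c = 1e3, 1e-3
Ro_w, Ro_wp = wave_rossby(sigma=1e-3, Ro_c=Ro_c, S=S,
                          theta=np.pi/3, psi=np.pi/2)
print(f"Ro_c,crit = {critical_Roc(S):.2e}")
print("sub-inertial" if Ro_wp < 1.0 else "super-inertial")
```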
5. REYNOLDS STRESS CONTRIBUTIONS TO GIW AMPLITUDES
As a means of comparison, the amplitude and the wave energy flux of the GIWs may be computed exactly when using the convection model presented earlier, where the impact of rotation on the waves is treated coherently.
In a manner similar to Goldreich & Kumar (1990) and Lecoanet & Quataert (2013), although with a greater degree of computational complexity, one may derive the wave amplitudes for GIWs in an f-plane. As seen in Mathis et al. (2014), one must first find solutions to the homogeneous Poincaré equation for the GIWs and then use linear combinations of those solutions to construct solutions to the forced equation in the convection zone. These equations result from writing the linearized equations of motion in an f-plane as a single equation for the vertical velocity $W$, where $S$ is the convective source term described in detail below. Note that the thermal sources derived in Samadi & Goupil (2001) have been neglected here, as in Mathis et al. (2014), for they have been found to be comparatively small for gravity waves when compared to Reynolds stresses (Belkacem et al. 2009b). In addition, the damping mechanisms (i.e. the radiative damping and the damping due to convection-wave interactions) are neglected here. As pointed out in Samadi et al. (2015), the amplitude of stochastically excited waves is proportional to the ratio of the energy injection rate, which measures the efficiency of the coupling of turbulent motions with waves, to the damping. The focus of this work is on the energy injection rate; a coherent treatment of the turbulent damping of waves in rotating stars will be considered in forthcoming work.
In the stable region, where $S$ is assumed to vanish, it can be shown that if one follows the methodology of constructing normal modes as in Gerkema & Shrira (2005), then the solutions of the homogeneous Poincaré equation for GIWs may be expanded as $w = w(\chi, z)e^{i\omega t}$. The horizontal coordinate $\chi = \cos\psi\,x + \sin\psi\,y$ corresponds to the distance along the direction of the wave propagation, with an angle $\psi$ in the horizontal plane, as seen in Gerkema & Shrira (2005) and Figure 1 of Mathis et al. (2014). Therefore, as before, the solution of the forced Poincaré equation for GIWs in the convection zone may be expanded in modes, where $\omega$ is the chosen frequency, $k_n$ is the sequence of eigenvalues associated with it, the $\psi_n$ are the eigenmodes of the reduced Poincaré equation (see Equation 30), and $A_n$ is each mode's amplitude. Technically, the full velocity field would be an integral over all frequencies and the sum over modes associated with each frequency. However, for simplicity, this discussion will at first focus on a single frequency taken to be within the band of propagative frequencies. Substituting this into Equation 44 yields an equation for the mode amplitudes given a source function. Noting that the second term is simply the homogeneous equation, it vanishes, leaving

$$\left[\partial_{zz}\psi_n + 2ik_n\delta\,\partial_z\psi_n - k_n^2\left(\delta^2 + 1\right)\psi_n\right]e^{ik_n(\chi + \delta z) + i\omega t} = \partial_t S. \quad (48)$$
Utilizing Equation 30, this becomes
Assuming homogeneous Dirichlet boundary conditions on $\psi_n$, that the change in the amplitudes at infinity is zero, and an initial condition of zero, this can be integrated against a single conjugate mode of index $m$ to obtain the constant amplitude (Equation 50), where the normalization $c_n$ follows from the orthogonality condition on the $\psi_n$ and $L = z_2 - z_1$ is the depth of the domain. The convective source term involves $\mathbf{F} = \nabla\cdot(\mathbf{v}\otimes\mathbf{v})$, the Reynolds stresses due to the convective velocities $\mathbf{v}$, and can be further simplified by noting the definition of the perpendicular direction. The integral in the numerator of Equation 50 can be identified as a Fourier transform of the source in time and space. Treating it as such, it becomes in turn a Fourier transform of a product, or a convolution in spectral space of the Reynolds stress with a Heaviside function $H$ that confines the convection to the convective region, and the reduced eigenmodes.

Assuming henceforth that the Brunt-Väisälä frequency is a discontinuous jump of amplitude $N = S\omega_c$, there is an exact solution for all three wave classes: sub-inertial, inertial, and super-inertial. This assumption provides an approximation of the stratification in a star, but captures its order-of-magnitude effects. This means that all integrals except the one over the Reynolds stresses can be evaluated. The latter depends upon the turbulence model that is chosen; the one introduced at the beginning of this paper will be examined here. Specifically, with this choice of $N$, the reduced Poincaré equation becomes

$$\partial_{zz}\psi_n + k_n^2\alpha^2\psi_n = 0 \quad (0 \le z < s), \qquad \partial_{zz}\psi_n + k_n^2\beta^2\psi_n = 0 \quad (s \le z \le L),$$

where $\omega_c$ is the convective overturning frequency and $s = z_c - z_1$ is the depth of the radiative-convective interface. The boundary conditions are that $\psi_n(0) = \psi_n(L) = 0$, with matching conditions and momentum continuity at the interface leading to the dispersion relationship. With these choices, the above equations admit the following solution for the sub-inertial waves:

$$\psi_n = \begin{cases} \dfrac{\sin\left(k_n\beta(L - s)\right)}{\cos\left(k_n\beta L\right)\sin\left(k_n\alpha s\right)}\,\sin\left(k_n\alpha z\right), & 0 \le z < s, \\[1.5ex] \sin\left(k_n\beta z\right) - \tan\left(k_n\beta L\right)\cos\left(k_n\beta z\right), & s \le z \le L, \end{cases}$$

with a corresponding dispersion relationship; the super-inertial waves take an analogous form with their own dispersion relationship. Note that with $\sin(k_n\alpha s)$ appearing in the coefficient, there are certain values $k_n\alpha s = m\pi$, with $m$ some integer, for which this solution is invalid. This provides an additional selection criterion on the values of $S$ that have solutions. Note that the inertial waves are already normalized, with $c_n = 1$. The integrals for the denominator in Equation 50 are very similar. Finally, from Equation 62 in André et al. (2017), the vertical wave flux for a single mode follows (Equation 64). Note that this definition of the flux is slightly different from the interfacial flux, which used an approximation of the pressure. Moreover, that interfacial flux is a local model with driving taking place only at the interface, whereas the current model assesses wave driving throughout the convective zone. The definition of the flux given in Equation 64 is consistent with previous studies of gravity wave driving in the bulk of convective regions (e.g., Press 1981; Goldreich & Kumar 1990; Lecoanet & Quataert 2013), where it is seen that the flux for a discontinuous Brunt-Väisälä frequency is $F_z \approx F_c S^{-1}$, where $F_c$ is the convective flux.
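To make the mode structure concrete, the sketch below numerically finds eigenvalues $k_n$ for the sub-inertial branch by enforcing continuity of $\psi_n$ and $\partial_z\psi_n$ at the interface $z = s$ (the matching condition stated above). Since the explicit dispersion relation did not survive extraction, the condition $\alpha\cot(k_n\alpha s) + \beta\cot(k_n\beta(L-s)) = 0$ used here is derived from that continuity requirement and should be read as an assumption, as are the numerical values of $\alpha$, $\beta$, $s$, and $L$.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed, illustrative parameters for the two-layer eigenvalue problem.
alpha, beta = 0.8, 2.0   # vertical wavenumber factors in [0, s) and [s, L]
L, s = 1.0, 0.5          # domain depth and interface depth (s = z_c - z_1)

def matching(k):
    # Continuity of psi_n and of d(psi_n)/dz at z = s for the piecewise solution
    # implies alpha*cot(k*alpha*s) + beta*cot(k*beta*(L - s)) = 0.
    return alpha / np.tan(k * alpha * s) + beta / np.tan(k * beta * (L - s))

# Scan for sign changes of the matching condition, then refine with brentq,
# skipping the spurious sign flips at the poles of the cotangents.
ks = np.linspace(0.1, 40.0, 4000)
vals = matching(ks)
roots = []
for k_lo, k_hi, v_lo, v_hi in zip(ks[:-1], ks[1:], vals[:-1], vals[1:]):
    if abs(v_lo) < 1e2 and abs(v_hi) < 1e2 and v_lo * v_hi < 0:
        roots.append(brentq(matching, k_lo, k_hi))

print("first eigenvalues k_n:", [round(k, 4) for k in roots[:5]])
```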
This scaling can be obtained using Equation 64 if one considers the regime of low-frequency gravity waves in the nonrotating case (where $f = 0$), which are described within the JWKB approximation, assuming that their horizontal scales are set by the convection, where $v_c$, $\omega_c$, and $k_c$ are the convective velocity, frequency, and wavevector, respectively, while one also sets $\omega \approx \omega_c$ and $k \approx k_c$. Note that within these assumptions the influence of the spatial behaviour of the eigenmodes is not taken into account. Thus, with the definition of the amplitude, the flux follows, and averaging over the stable region yields the quoted scaling. The integrals in the denominator of the wave flux can likewise be evaluated for the sub-inertial waves, while $D_n = \mathrm{sech}^4(k_n L\beta)$ for the super-inertial waves. The integral in the numerator can be computed exactly for the convection model discussed in Section 2. Specifically, using the definition of the velocities there, one obtains the flux for the sub-inertial waves, where the horizontal and time integrals impose $\omega = 2\omega_c$ and $k_\perp = k_n/2$; the corresponding expression for the super-inertial waves follows analogously. These waves will attain a maximum flux near the equator, especially for low convective Rossby number, where the waves become increasingly equatorially focused. Thus, evaluating these expressions at the equator, one has a wave flux analogous to the section on interfacial waves, but excited by the Reynolds stresses in the bulk of the convection zone.

Figure 6. Scaling of the gravito-inertial wave flux $F_z$, normalized by the gravity wave flux for the non-rotating case $F_0$, when excited by columnar convection at the equator, where the stiffness, the ratio of the Brunt-Väisälä frequency to the rotation frequency, $S = N_R/N_0$, is taken to be (a) $10^5$, (b) $10^3$, and (c) $10$, and where $s = L/2$ and $\psi = \pi/2$. The vertical dashed line denotes the transition between sub-inertial and super-inertial waves and the horizontal line denotes unity. Panel (d) illustrates the scaling of the pure gravity wave flux ($F_0$) normalized by the total convective flux with the stiffness parameter, showing that the wave flux is always below the convective flux, but that the gravito-inertial wave flux can be greatly amplified in comparison. Note that such mode amplification of GIWs has also been seen in a global model (see Figure 7 of Neiner et al. 2020).

Figure 6 illustrates this flux for several values of the stiffness, where each value of the convective Rossby number is computed such that the dispersion relationships are obeyed, leading to its discrete nature. The sub-inertial waves have an oscillatory character, where some waves achieve a resonance and have a peak in flux. The peak flux arises at moderate convective Rossby numbers below $1/\sqrt{5}$, due to $\alpha$ being small and the transition from super-inertial to sub-inertial waves. The decay of the flux at lower convective Rossby numbers results from the weakening convective velocities and the increasing horizontal wavenumber of the convection. The peak in the super-inertial waves also occurs near $\mathrm{Ro}_c = 1/\sqrt{5}$, above which it decays primarily due to the scaling of the denominator of the flux, which arises from the hyperbolic trigonometric functions in the structure of the eigenmodes, and it asymptotes to the flux of pure gravity waves driven by nonrotating convection. When considering Figure 4, and also Figure 4 in Mathis et al. (2016), where $\mathrm{Ro}_c = 1/\sqrt{5}$ is distinguished by the dashed gray line,
one can see that these phenomena may occur for a majority of low-mass stars along their evolution, in particular during the PMS (or close to the base of their convective envelope), because of the low values of $\mathrm{Ro}_c$ during these evolutionary phases (and in these regions).
The actual values of both the nonrotating and rotating fluxes are very dependent upon the value of the stiffness chosen, due to the dependence on the average of the eigenfunction in the stable region in the numerator and the normalization in the denominator of the flux derived above. The non-rotating wave flux normalized by the total convective flux is shown in Figure 6(d). Note that this flux $F_0$ differs from that of Section 4 because the flux defined in Equation 64 has a complex spatial dependence. When averaged over the stable region, this yields $F_0 \propto Q(S)S^{-4}$, where $Q(S)$ is the dependence arising from the integral of the source term and the average value of $|\psi_n\partial_z\psi_n|$ in the radiative region (see Equation 65). However, if one makes the assumptions explained in the Appendix, where the spatial dependence of the eigenfunctions and their dispersion relationship become simple and continuous, one recovers $F_0 \propto S^{-1}$. Thus, if one makes similar assumptions for the gravito-inertial waves, then $F_z/F_0 \propto 1$, whereas with the more complex spatial dependence of the exact eigenfunctions and the more intricate dispersion relationship it scales as $F_z/F_0 \propto Q(S)S^{-2}$, as seen in Figure 6. Finally, both the flux of the IGWs and that of the GIWs are always weaker than the total convective flux.
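As a consistency check on the $S^{-1}$ recovery quoted here, the following symbolic sketch walks through the appendix's JWKB chain ($\partial_z\psi_n \approx ik_V\psi_n$ with $k_V \approx k_n N/\omega$, $v_w \approx \omega_c^2/(k_c N)$, $k_n \approx k_c$, and $\omega \approx \omega_c$). Only the algebra is verified; the intermediate flux form used is an assumption consistent with the non-rotating limit of Equation 64.

```python
import sympy as sp

rho0, omega_c, k_c, N = sp.symbols('rho_0 omega_c k_c N', positive=True)

# JWKB, non-rotating flux F_z = rho0*omega/(2*k_n**2) * |A_n^2 psi_n d_z(psi_n)|,
# with d_z(psi_n) ~ i*k_V*psi_n, |psi_n|^2 ~ 1, and v_w = A_n*psi_n:
k_V = k_c * N / omega_c           # vertical wavenumber with omega ~ omega_c, k_n ~ k_c
v_w = omega_c**2 / (k_c * N)      # wave vertical velocity from v_w/v_h ~ omega/N
F_z = rho0 * omega_c / (2 * k_c**2) * v_w**2 * k_V

u_c = omega_c / k_c               # convective velocity
F_c = rho0 * u_c**3               # convective flux, up to an order-unity factor
M = omega_c / N                   # Mach number = inverse stiffness 1/S

print(sp.simplify(F_z / (F_c * M)))   # -> 1/2, i.e. F_z ~ F_c * S**-1
```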
6. SUMMARY AND DISCUSSION

A model of rotating convection originating with Stevenson (1979) has been extended to include thermal and viscous diffusion for any convective Rossby number in Augustson & Mathis (2019). The scalings of the velocity and superadiabaticity in terms of the colatitude and Rossby number are outlined in Section 2. Asymptotically, at low convective Rossby number and without diffusion, these match the expressions given in Stevenson (1979), as well as the numerical results found in the 3D simulations of Käpylä et al. (2005) and Barker et al. (2014).
Here this rotating convection model has been employed to examine the excitation of gravito-inertial waves (GIWs) by two different channels: one by interfacial excitation and another by Reynolds-stress excitation. First, the convection model is applied to the interfacial wave excitation paradigm developed in Press (1981), where the gravity wave dynamics is replaced with the GIW dynamics computed in Mathis et al. (2014) and André et al. (2017). Both mechanisms are considered since, as seen in Lecoanet et al. (2015), both sources of wave excitation play a role in simulations of gravity wave excitation, with the dominant one being due to the volume-integrated Reynolds stresses. Next, with a turbulent convective velocity spectrum in hand, more sophisticated approaches allow for the computation of the wave energy flux in the context of both more realistic variations in the Brunt-Väisälä frequency as well as in a non-interfacial paradigm that includes the Reynolds stresses throughout the convection zone. Such a step has been taken in this paper, which builds upon the methods developed in Belkacem et al. (2009a), Lecoanet & Quataert (2013), and Mathis et al. (2014), where the gravity wave and GIW excitation amplitudes and accompanying wave energy injection rate are computed by solving the wave equation driven by a convective source term. This approach provides a general method of computing the wave flux that takes into account the volumetric excitation of the waves and that includes the region in which they are potentially evanescent. Specifically, to assess the influence of the convective Reynolds stresses and of rotation on the GIWs, a wave energy flux estimate is constructed using an explicit computation of the amplitude for both the super-inertial and sub-inertial waves. The convection model of Paper I is then invoked as a means of estimating the Reynolds stresses.
In the context of the wave energy flux, distinct parameter regimes have been found that depend upon the mode of excitation (either interfacial pressure perturbations or convective Reynolds stresses), the convective Rossby number (or alternatively the rotation rate), and the stiffness of the convective-radiative interface. The visibility of these regimes depends upon the colatitude selected, with the distinction between them being starkest at low latitudes near the equator and vanishing at the poles, due to the impact of the Coriolis acceleration on the frequency range over which GIWs may propagate. As depicted in Figure 2, interfacially-excited sub-inertial waves have a peak energy flux near a critical convective Rossby number, but decay below it. Interfacially-excited super-inertial waves, on the other hand, have an increasing energy flux with increasing frequency and increasing Rossby number. As a means of comparison, the influence of convective Reynolds stresses on the wave amplitude and their energy flux has been assessed by directly employing the convection model of Paper I. The detailed behavior of the eigenfunctions appropriate for GIWs and how they interact with the convective source is examined in Section 5. A trend similar to that of the interfacial waves is found, where there is a decline in the amplitude of the fluxes as the convective Rossby number is decreased for both the sub- and super-inertial waves. However, there is a large variation in the sub-inertial wave flux for a given convective Rossby number, depending upon whether the wave is in resonance or not, leading to the series of peaks seen in Figure 6, where the flux relative to gravity waves in nonrotating convection can be many orders of magnitude larger, but still below the convective flux. The amplitude of the nonrotating flux is computed using the same mathematical formalism as for the gravito-inertial waves, but utilizing the proper eigenmodes. The super-inertial waves have an increased flux at lower Rossby numbers, reaching a peak at the transitional Rossby number of $1/\sqrt{5}$ for the parameters chosen in Figure 6.
If realized, these characteristics of GIWs may have substantial consequences for the transport and mixing of angular momentum, chemical species, and heat in stellar and planetary interiors, in particular during the PMS of low-mass stars or close to the base of their convective envelope, as well as consequences for the seismic observations of them. According to the results presented in §4 and shown in Figure 2, the GIW energy flux due to interfacially-excited waves is likely to be reduced relative to the nonrotating case when the wave Rossby number is not close to the critical one, meaning that any transport mechanisms associated with those waves will be reduced as well. Similarly, as discussed in §5 and shown in Figure 6, the sub-inertial wave energy flux generally decreases at lower convective Rossby numbers, but reaches a peak at moderate Rossby number. What is described here is the energy injection rate by turbulent convective motions into GIWs. However, to get a more complete picture, the damping of GIWs resulting from their interactions with turbulent convection needs to be studied. Nevertheless, the examination of Be star outbursts in Neiner et al. (2020) has already pointed to tantalizing clues about the role of gravito-inertial waves in angular momentum transport. There, a global description of gravito-inertial wave excitation is employed to compute a wave spectrum that matches the observations well (see their Figure 7). Finally, the model examined here for the wave energy flux and excitation of GIWs has assumed a particular form of the convective Reynolds stresses that is valid only in local domains. However, this neglects global-scale shearing flows seen in 3D convection simulations in spherical geometry (e.g., Brun et al. 2011; Augustson et al. 2012; Alvan et al. 2015; Emeriau-Viard & Brun 2017) and theoretically predicted (Busse 2002; Julien et al. 2006; Grooms et al. 2010), as well as the more extremal convective events that can still occur frequently enough to influence the wave energy flux (e.g., Pratt et al. 2017a,b). These events have typically been modeled as collections of plumes for gravity waves (e.g., Schmitt et al. 1984; Schatzman 1993, 1996; Pinçon et al. 2016, 2017), and they are tied closely to the interfacial excitation model, as such events are more likely to deform the average interface depth, at least in the local region near the plume. Therefore, the model can be further improved by considering more sophisticated models of the structure of the flows, such as applying the models of rotating plumes considered in Pedley (1968) or Grooms et al. (2010). Thus, the formalism developed here will be extended in future work to include the influence of rotation on those plumes, as well as utilizing theoretical models for global-scale flows to better characterize GIW excitation and energy flux.
ACKNOWLEDGMENTS
The authors thank the referee for their very careful reading of the manuscript and their helpful and constructive comments. K. C. Augustson, S. Mathis, and A. Astoul acknowledge support from the ERC SPIRE 647383 grant and PLATO CNES grant at CEA/DAp-AIM. The authors also thank Q. André, U. Lee, C. Neiner, C. Pinçon, and V. Prat for fruitful conversations.
APPENDIX
A. GRAVITY WAVE FLUX IN THE NONROTATING AND LOW-FREQUENCY LIMIT

With the appropriate limit and the assumptions made in Goldreich & Kumar (1990) and Lecoanet & Quataert (2013), which also are similar to those made in Section 4 of this paper and in Press (1981), one can show that the two flux definitions are equivalent. To do this, recall the definition of the flux given in Equation 64. In the non-rotating limit this becomes

$$F_z = -\frac{\rho_0\,\omega}{2k_n^2}\left|A_n^2\,\psi_n\,\partial_z\psi_n\right|.$$
In the asymptotic limit of low-frequency gravity waves, the JWKB approximation can be applied, where $\partial_z\psi_n \approx ik_V\psi_n$, and in this limit the vertical wavenumber is approximately $k_V \approx k_n N/\omega$. Now, noting that $v_w = A_n\psi_n$ is the vertical velocity of the wave, and making the assumptions of Goldreich & Kumar (1990), Lecoanet & Quataert (2013), and Press (1981), one has that $k_n \approx k_c$, $\omega \approx \omega_c$, and $v_w \approx \omega_c^2/(k_c N)$, where the subscript $c$ indicates the wavevector ($k_c$) and overturning frequency ($\omega_c$) of the convection. To obtain this expression for $v_w$, we consider the low-frequency regime, where the ratio between the vertical and the horizontal components of the gravity waves' velocity is given approximately by $v_w/v_h \approx \omega/N$. In addition, it is assumed, as in Equation (36) of Press (1981) and in Equation (49) of Lecoanet & Quataert (2013), that the horizontal wave velocity is given by $v_h \approx v_c = \omega_c/k_c$. Therefore, since $u_c = \omega_c/k_c$ (the convective velocity), one has that $F_z \approx F_c M$, where $M = \omega_c/N = S^{-1}$ is the Mach number, or the inverse stiffness ($S$), and $F_c$ is the convective flux. Hence, under these limits and assumptions, the flux definitions have the same scaling. Note that within these assumptions the influence of the spatial behaviour of the eigenmodes is not apparent, because $|\psi_n|^2 \approx |e^{ik_V z}|^2 = 1$, which is not the case for the exact solutions used in Equation 65. | 2020-09-23T01:01:07.192Z | 2020-09-22T00:00:00.000 | {
"year": 2020,
"sha1": "60a37fe47c1af43e3f1d31a33d99892af6c46860",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/abba1c/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8fd9ac4611cd6326cdecf9a227222570724b4e2c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15356434 | pes2o/s2orc | v3-fos-license | The cost-effectiveness of a NSCLC patient assistance program for pemetrexed maintenance therapy in People’s Republic of China
Background Eli Lilly and the China Primary Health Care Foundation are currently implementing a patient assistance program (PAP) in China, which allows first-line nonsquamous non-small-cell lung cancer (NSCLC) patients who complete four cycles of pemetrexed induction therapy to receive free, continuous pemetrexed maintenance therapy. Objective To estimate the cost-effectiveness of pemetrexed maintenance therapy vs basic standard care (BSC) and the economic impacts of providing a PAP for pemetrexed maintenance therapy to NSCLC patients who have completed pemetrexed induction therapy in a Chinese health care setting. Methods We developed a novel decision-analytic model to evaluate the long-term costs and clinical efficacy of pemetrexed plus BSC vs BSC alone. We utilized a three-state (progression-free survival, progressed disease, and dead) partition survival model for both the clinical and economic aspects of the analysis. Cost and health utility estimates were derived from the literature. We performed a scenario analysis to estimate the real-world impact of introducing the PAP in China by comparing the use of the PAP vs non-PAP. Model uncertainty was evaluated using one-way and multivariate probabilistic sensitivity analysis. Results Compared to BSC, pemetrexed plus BSC resulted in a gain of 0.22 years of life (95% credible range [CR]: 0.04–0.46) and 0.13 quality-adjusted life years (95% CR: 0.04–0.26) per patient, at an increased cost of $28,105 (95% CR: −$22,720 to $48,646) without a PAP and $3,068 (95% CR: −$1,263 to $9,163) with a PAP. The incremental cost-effectiveness ratio for pemetrexed plus BSC vs BSC alone was cost-prohibitive at $222,700 for non-PAP, but cost-effective at $24,319 with a PAP. Conclusion Our study suggests that maintenance pemetrexed therapy following pemetrexed induction for patients with advanced NSCLC is likely to be highly non-cost-effective in the absence of a PAP, but the pending implementation of the PAP promises to make it cost-effective, with a >90% probability of cost-effectiveness at a Chinese willingness-to-pay threshold per quality-adjusted life year.
Introduction
The global incidence and prevalence of lung cancer are rapidly increasing. In 2015, the estimated total number of incident lung cancer cases was 733,300. 1 Approximately 85% of these cases are non-small-cell lung cancer (NSCLC), a common lung cancer which develops slowly compared to small cell lung cancer but is less receptive to chemotherapy treatments. 2 In the Chinese patient series reported by Shi et al, some patients had squamous cell carcinoma, and 67.9% had advanced stage disease (IIIB-IV). 3 The treatment for advanced stage NSCLC typically consists of chemotherapy, surgery, and radiation therapy. After chemotherapy induction cycles, chemotherapy maintenance regimens may use a different drug to expose patients to an agent with a different pharmaceutical mechanism. [4][5][6] Alternatively, successful combination induction regimens may utilize the least toxic agent from the combined induction regimen as maintenance therapy, so that a proven beneficial therapy may continue to be administered while also decreasing a patient's adverse events. 7,8 Pemetrexed is a chemotherapy drug that inhibits the formation of precursor purine and pyrimidine nucleotides, preventing the formation of DNA and RNA that fuels the growth and survival of both normal and cancer cells. 9 Pemetrexed first gained approval in the USA in 2004 and is indicated (within the USA) for use in locally advanced or metastatic nonsquamous NSCLC after initial treatment in combination with cisplatin. It is also indicated for use as maintenance therapy for patients whose disease has not progressed after four cycles of platinum-based first-line chemotherapy and after prior single-agent chemotherapy. 10 Pemetrexed was approved by the Chinese Food and Drug Administration in 2011 for the initial treatment of patients with locally advanced or metastatic nonsquamous NSCLC when used in combination with cisplatin. 11 In China, Eli Lilly and the China Primary Health Care Foundation are currently implementing a patient assistance program (PAP), which allows first-line nonsquamous NSCLC patients who complete four cycles of pemetrexed induction therapy to receive free, continuous pemetrexed maintenance therapy. To date, the clinical and cost implications of the PAP have yet to be determined, and no health economic analyses have considered the topic. The objective of our study is to estimate the cost-effectiveness of pemetrexed maintenance therapy vs basic standard care (BSC) and the economic impacts of providing a PAP for patients who have completed pemetrexed induction therapy in a Chinese health care setting.
Methods

Approach
We developed a novel decision-analytic model to evaluate the long-term costs and clinical efficacy of pemetrexed plus BSC vs BSC alone. Only direct medical costs related to treatment regimens were estimated, including induction therapy, maintenance pemetrexed therapy, BSC therapy, and major adverse events. The analysis was conducted from a Chinese health care perspective using a lifetime horizon to enable assessment of life expectancy. All costs are reported in USD, and long-term costs and outcomes were discounted at 3% per year. 12 We used a societal willingness-to-pay threshold of 39,900 USD per additional quality-adjusted life year (QALY), representing three times the gross domestic product (GDP) per capita of China, as recommended by the World Health Organization. 13,14 The range from GDP per capita to three times GDP per capita (13,300-39,900 USD) was considered cost-effective, whereas any value less than GDP per capita ($13,300) was considered "very cost-effective". All costs were converted to USD using an exchange rate of 1 USD = 6.57 RMB. 15 The models were programmed in Microsoft® Excel (Microsoft Corporation, Redmond, WA, USA). Ethical approval was not obtained because this is a health economic model based entirely on previously published literature and no patient-level data were analyzed.
Population
The PARAMOUNT trial examined the efficacy of pemetrexed continuation maintenance therapy vs placebo in patients with advanced nonsquamous NSCLC whose disease had not progressed during four cycles of pemetrexed-cisplatin induction chemotherapy. 16 The trial included 939 NSCLC patients, of which 539 patients did not progress while on induction therapy; then these patients were randomized 2:1 to receive either pemetrexed maintenance therapy or BSC. 16 The median follow-up at the overall survival (OS) data cutoff date was 12.5 months (95% confidence interval [CI]: 11.1-13.7 months) for all patients and 24.3 months (95% CI: 23.2-25.1 months) for surviving patients. 17 Patients randomized to pemetrexed had a significant reduction in the risk of disease progression compared with the placebo group (hazard ratio [HR]: 0.62; 95% CI: 0.49-0.79; p<0.0001) 16 and a statistically significant reduction in the risk of death (HR: 0.78; 95% CI: 0.64-0.96; p=0.0195). 17 The median progression-free survival (PFS) and OS for those on pemetrexed maintenance therapy were 4.1 months and 13.9 months compared with 2.8 months and 11 months for BSC, respectively. 16,17 In addition, pemetrexed was well-tolerated by patients, although laboratory Grade 3-4 adverse events were more common in the pemetrexed group.
Model structure
We utilized a three-state (PFS, progressed disease, and dead) partition survival model, a commonly used method in advanced oncology indications, for both the clinical and
economic aspects of the analysis. This approach uses the area under two survival curves (PFS and OS) to calculate the proportion of patients in each health state at a given time point. 18 Partition survival models bypass the need to estimate discrete transition probabilities and avoid the need for additional assumptions by modeling the survival data directly. The approach is relevant for modeling the range of events typically considered in the analysis of cost-effectiveness of health technologies including OS, disease-free survival, and PFS.
Hypothetical patients began the model in the PFS health state, where they remained until they either experienced disease progression or death from other causes. Patients in the progressed disease state remained there until they either died from progressed disease or from other causes. Patient survival, quality-adjusted survival, and health care cost were estimated for each model cycle and then summarized over the entire time horizon for both treatment options. Model cycles were 1 month each to account for the high rates of disease progression in NSCLC. We utilized a 5-year time horizon, at which point 95% and 98% of pemetrexed plus BSC and BSC alone patients had died, respectively; we chose this short time horizon due to the rapid rate of disease progression and in recognition of the limits of data extrapolation validity.
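A partition survival model of this kind is straightforward to implement; the sketch below shows the state-occupancy bookkeeping with placeholder survival functions (the Weibull parameters are illustrative assumptions, not the fitted PARAMOUNT values).

```python
import numpy as np

months = np.arange(0, 61)   # monthly cycles over the 5-year horizon

def weibull_surv(t, scale, shape):
    """Weibull survival function S(t) = exp(-(t/scale)**shape)."""
    return np.exp(-(t / scale) ** shape)

# Placeholder curve parameters (illustrative only, not the fitted trial values).
pfs = weibull_surv(months, scale=4.0, shape=1.1)    # progression-free survival
os_ = weibull_surv(months, scale=13.0, shape=1.2)   # overall survival
os_ = np.maximum(os_, pfs)                          # enforce OS >= PFS

# Partition-survival bookkeeping: three mutually exclusive states each cycle.
progression_free = pfs
progressed = os_ - pfs
dead = 1.0 - os_

i = 12
print(f"month {i}: PF = {progression_free[i]:.2f}, "
      f"progressed = {progressed[i]:.2f}, dead = {dead[i]:.2f}")
print(f"undiscounted life expectancy ~ {os_.sum() / 12:.2f} years")
```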
Clinical inputs
We fit parametric Weibull curves to digitized copies of PARAMOUNT-reported BSC arm Kaplan-Meier PFS (as reassessed at the time of OS data cutoff) and final OS data 17 using a maximum likelihood approach. 19 Parametric survival modeling allows extrapolation of hypothetical patient outcomes beyond trial-reported follow-up time. To calculate the pemetrexed plus BSC arm survival, we applied the trial-reported PFS and OS HRs at the time of OS data cutoff to the BSC arm's parametric curves (Table 1). We also modeled adverse event rates for anemia, neutropenia, and fatigue based on PARAMOUNT findings. Only adverse events with Grade 3-4 toxicity were considered.
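Under the proportional-hazards reading of this step, a hazard ratio applied to a Weibull baseline simply scales its cumulative hazard; the sketch below illustrates this with the trial-reported HRs (0.62 for PFS, 0.78 for OS) applied to placeholder baseline parameters. The baseline Weibull values are assumptions chosen only to roughly reproduce the BSC medians, not the study's fitted parameters.

```python
import numpy as np

def weibull_surv(t, scale, shape, hr=1.0):
    """Weibull survival with a proportional-hazards multiplier on the cumulative hazard."""
    return np.exp(-hr * (t / scale) ** shape)

def median_survival(scale, shape, hr=1.0):
    # Solve exp(-hr*(t/scale)**shape) = 0.5 for t.
    return scale * (np.log(2.0) / hr) ** (1.0 / shape)

# Placeholder BSC baseline parameters (illustrative, not the fitted values).
pfs_scale, pfs_shape = 3.8, 1.1
os_scale, os_shape = 13.5, 1.2

for label, hr_pfs, hr_os in (("BSC", 1.0, 1.0), ("pemetrexed + BSC", 0.62, 0.78)):
    m_pfs = median_survival(pfs_scale, pfs_shape, hr_pfs)
    m_os = median_survival(os_scale, os_shape, hr_os)
    print(f"{label:18s}: median PFS ~ {m_pfs:.1f} mo, median OS ~ {m_os:.1f} mo")
```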
Quality of life inputs
We derived health state utility values for progression-free and progressed disease health states from a community-based study in advanced NSCLC from the UK, which used the standard gamble interview and visual analog scale to assess quality of life (Table 1). 20 Utility values were multiplied by the number of patients in each health state for each month in the time horizon, then summed for comparison between groups. Based on results of an EQ-5D questionnaire given during the PARAMOUNT trial, we assumed no significant differences in health conditions between the two study arms during maintenance therapy, and thus the same utility values were assigned to both treatment groups.
Cost inputs
Cost estimates for the model were largely derived from a previous Chinese perspective cost-effectiveness model of maintenance pemetrexed after cisplatin and pemetrexed chemotherapy for NSCLC (Table 2). 21 Drug costs were local to China and were sourced from GBI health. 22 For drug costs, we included the cost of induction therapy (four cycles) for both arms. We assigned a body surface area (used for calculating pemetrexed dosage) of 1.72 m² for all patients. A multiplier of 0.3 was used for BSC costs between progressed and nonprogressed disease. 21
Analysis
We conducted a cost-effectiveness analysis comparing pemetrexed plus BSC vs BSC alone and considered both costs and clinical outcomes. In the pemetrexed plus BSC arm, we considered scenarios where 1) pemetrexed is unsubsidized using the PAP and 2) pemetrexed is 100% subsidized using the PAP. We calculated life years, QALY, and lifetime direct medical costs for both treatment groups. The incremental health benefit of pemetrexed plus BSC vs BSC alone was calculated as the incremental difference between effectiveness and risk changes, with both risk and benefit measured using QALYs. To calculate the incremental net benefit, PAP and non-PAP QALYs were multiplied by the societal willingness to pay per additional QALY, then we subtracted the costs to the health care payer resulting in either a benefit or cost to society. If the value is positive, it is a net benefit to society; if negative, it is a net cost to society. The incremental cost-effectiveness ratio (ICER) was calculated as the difference in costs divided by the difference in QALYs.
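These two summary measures reduce to a few lines of arithmetic; the sketch below uses the point estimates reported in the abstract (incremental QALYs of 0.13, incremental costs of $28,105 without and $3,068 with the PAP), so small differences from the published ICERs are rounding only.

```python
WTP = 39_900                      # societal willingness to pay per QALY (3x GDP per capita)
d_qaly = 0.13                     # incremental QALYs, pemetrexed + BSC vs BSC

for label, d_cost in (("non-PAP", 28_105), ("PAP", 3_068)):
    icer = d_cost / d_qaly
    inmb = WTP * d_qaly - d_cost  # incremental net monetary benefit
    verdict = "cost-effective" if icer <= WTP else "not cost-effective"
    print(f"{label:8s}: ICER = ${icer:,.0f}/QALY, INMB = ${inmb:,.0f} -> {verdict}")
```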
Model uncertainty was evaluated using one-way and multivariate probabilistic sensitivity analysis. One-way sensitivity analysis was performed using values derived from CIs or reasonable ranges as determined from published sources. Multivariate probabilistic sensitivity analysis was performed using Monte Carlo simulation, in which the model inputs were drawn from probability distributions based on parameter ranges representing the uncertainty in the estimate.
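A minimal probabilistic sensitivity analysis of the kind described can be sketched as below; the chosen distributions (normal for incremental QALYs, gamma for incremental costs) and their parameters are assumptions for illustration, with the QALY spread loosely mirroring the abstract's 95% credible range.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
WTP = 39_900

# Illustrative parameter distributions (assumed, not the study's fitted inputs).
d_qaly = rng.normal(0.13, 0.055, n)                        # incremental QALYs
d_cost = rng.gamma(shape=4.0, scale=3_068 / 4.0, size=n)   # incremental cost with PAP

nmb = WTP * d_qaly - d_cost   # net monetary benefit per simulation
print(f"P(cost-effective at ${WTP:,}/QALY) = {np.mean(nmb > 0):.1%}")

# Cost-effectiveness acceptability curve: sweep the WTP threshold.
for wtp in (13_300, 39_900, 45_662):
    print(f"WTP ${wtp:,}: {np.mean(wtp * d_qaly - d_cost > 0):.1%}")
```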
An additional analysis was undertaken to better understand the real-world impact of introducing the PAP in China. This analysis compares the use of the PAP vs non-PAP. In the non-PAP arm, all patients are expected to receive BSC after induction therapy (Figure 1). In the PAP arm, the percentage of patients receiving maintenance therapy with pemetrexed was varied from 0% to 100%.

Results

The ICER for pemetrexed plus BSC vs BSC alone was cost-prohibitive at $222,700 for non-PAP, but cost-effective at $24,319 with a PAP. Even though pemetrexed was not charged to payers in the PAP scenario, the increase in costs compared to BSC was primarily driven by the extended survival brought by pemetrexed, and secondarily by the cost of adverse events with pemetrexed. The PAP and non-PAP scenarios both accounted for pemetrexed maintenance therapy and thus had equivalent survival outcomes. However, the PAP scenario represented a cost savings of $25,037 (95% CR: $18,402-$33,516) compared to non-PAP.
The results of the one-way sensitivity analyses are shown in Figure 2; the incremental cost, QALYs, and ICER were primarily influenced by the OS HR.
In probabilistic sensitivity analysis, PAP was cost-effective vs BSC in 90.2% of simulations at a $39,900 willingness-to-pay per QALY threshold, and 95.77% of simulations at a $45,662 (¥300,000) threshold. The cost-effectiveness scatterplot and cost-effectiveness acceptability curve graphs are shown in Figures 3 and 4. The additional analysis compared a scenario of a PAP vs non-PAP. In the non-PAP arm, all patients received BSC, whereas in the PAP arm, the percentage of patients receiving pemetrexed was varied from 5% to 25%. The results of this analysis (Table 4) show that as the percentage of patients receiving pemetrexed increases, both costs and QALYs increase consistently, resulting in the same ICER for each iteration.
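The constant ICER across uptake levels follows from simple linear mixing: a population in which a fraction of patients receives pemetrexed scales incremental costs and incremental QALYs by the same factor, so their ratio is unchanged. A quick check (the per-patient increments reuse the abstract's PAP values):

```python
d_cost, d_qaly = 3_068, 0.13       # per-patient increments with the PAP (from above)

for uptake in (0.05, 0.10, 0.25):  # fraction of patients receiving pemetrexed
    pop_cost = uptake * d_cost     # population-average incremental cost
    pop_qaly = uptake * d_qaly     # population-average incremental QALYs
    print(f"uptake {uptake:.0%}: ICER = ${pop_cost / pop_qaly:,.0f}/QALY (unchanged)")
```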
Discussion
We developed a novel cost-effectiveness model to estimate the incremental cost and benefits associated with a PAP program for pemetrexed maintenance therapy in NSCLC patients from a Chinese societal perspective, based on the PARAMOUNT clinical trial and other published sources. We found that although pemetrexed maintenance therapy comes at considerable cost and is not cost-effective for the Chinese health care system at full price to payers, the addition of a PAP significantly drives the overall cost downward, making the survival benefits of pemetrexed maintenance therapy cost-effective and more widely available to patients.
A previous cost-effectiveness analysis looked at pemetrexed maintenance therapy without a PAP and similarly concluded that maintenance therapy was not cost-effective, suggesting a price reduction or dose adjustment was needed before widespread adoption by the Chinese health care system. 20 Our analysis considers the former recommendation, with potential manufacturer assistance in the form of a 100% discount for maintenance therapy provided the patient completes a full course of nondiscounted induction therapy. Once a PAP is implemented in China, our model suggests that pemetrexed maintenance therapy will be highly likely (>90%) to be cost-effective vs standard care.
Limitations
Our model has a number of limitations worth noting. First, the clinical data used in our model was limited to a single clinical trial and did not include Chinese patients. 16,17 Although PARAMOUNT was a relatively large cancer trial, the incorporation of data from other studies may provide more robust estimates. Regarding the absence of Chinese patients, Lee et al 22 and Liubao et al 24 concluded that such differences between Chinese individuals and other nationalities were not significant. 22,24 Similarly, our quality-of-life estimations were informed by a single study.
Second, we assumed that the survival HRs reported by PARAMOUNT investigators, summarizing approximately 2 years of follow-up, were held constant over the 5-year time horizon of our model. In the absence of long-term follow-up data, simplifying but data-driven assumptions are often necessary in modeling studies. Nonetheless, our modeled parametric fits to the rapid rates of progression and death in PARAMOUNT resulted in all patients progressing before the
second year, and <1% of patients surviving to 5 years; thus, the brief time horizon of our analysis limited the impacts of extrapolating the distant future from a relatively brief clinical trial. 16,17 Third, multiple assumptions were undertaken in the original cost estimations presented by Zeng et al. 21 BSC costs were not available for Chinese patients with advanced nonsquamous NSCLC, and so costs for patients with advanced gastric cancer were used. 21 Finally, we did not consider other available regimens such as erlotinib for maintenance therapy of NSCLC following pemetrexed induction therapy as we evaluated patients receiving pemetrexed who would otherwise be receiving BSC if not for the PAP.
Conclusion
From the perspective of the Chinese health care system, our study suggests that maintenance pemetrexed therapy after pemetrexed induction for patients with advanced NSCLC is likely to be highly non-cost-effective in the absence of a PAP, but the pending implementation of the PAP promises to make it cost-effective, with a >90% probability of cost-effectiveness at a Chinese willingness-to-pay threshold per QALY. Ongoing and future comparative clinical trials will provide valuable insight into the optimal treatment in second-line NSCLC, but presently, pemetrexed maintenance therapy combined with a PAP offers the best treatment option for NSCLC patients. | 2018-04-03T01:46:51.916Z | 0001-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "39c9a29ad3712a9aaf7811cf448414baf8387021",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=34750",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39c9a29ad3712a9aaf7811cf448414baf8387021",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": []
} |
117898534 | pes2o/s2orc | v3-fos-license | Recent Advances in Several Organic Reaction Mechanisms
This Review is a brief account of our theoretical contributions in seven research communications in the field of reaction mechanisms. Some mechanisms were corrected as in the case of the Baeyer-Drewsen indigo synthesis. When two very different reaction mechanisms had been proposed, as in the Clemmensen Reduction, a unified theory was provided. In other cases there were no reaction mechanisms at all, as in the Baeyer-Emmerling synthesis of indigo and in the Froehde Reaction for opioids. This deficit has been solved. The reaction that controls fructosazone regiochemistry has been described, and an internal process in a mixed osazone formation has been explained. All the proposals are based on well known reactivities and we provide complete and coherent reaction series with commented steps.
Introduction
In this Review we present the proposed reaction mechanisms directly, abbreviating the justification arguments given in the original communications.
When two reaction mechanisms had been proposed for the same reaction, as in the Baeyer-Drewsen indigo synthesis or in the Clemmensen Reduction, a single and coherent reaction mechanism is presented. In other cases the reaction mechanism was missing, as in the Baeyer-Emmerling synthesis of indigo and in the Froehde Reaction. This absence has been filled. The presence of simple halochromism has been pointed out in some instances of the latter reaction.
There will be made comments about the novelty of the mechanism and which intermediates present in other sequences were discarded and why.
Only the essential references are provided, for the bulk of them the reader is sent to the original papers.
The Baeyer-Drewsen Synthesis of Indigo
The course of this synthesis, starting from o-nitro-benzaldehyde and acetone in alkaline medium, is presented in Wikipedia as occurring through a nitrone intermediate [1], Figure 1. This is an error because there are no experimental examples of nitro group participations in dehydrations of this type. This misconception is mentioned in a 1932 communication [2].
Besides, the reaction between the carbanion and the nitro group can also be ruled out because the nitro group even doesn't react with a strong reducing reagent such as sodium borohydride.
Of course, we discarded the occurrence via nitrone. In 1996 Ranganathan [3] proposed the formation of o-nitrosobenzoylacetone via an oxido-reduction step. However, in that sequence the next intermediates are very crowded and a different route with precursors with no steric hindrance is provided, Figure 2. Thus, the complete reaction sequence we provided [4] is as follows: the first reaction product is a ketol, 4-hydroxy-4-(o-nitrophenyl)-2-butanone. This compound doesn't yield the second step of the Claisen-Schmidt Condensation, the dehydration to form the benzylidene derivative. This is due to the nitro group that creates an alternate reactivity.
It is well known that a hydrogen in the α-position to a nitro group is acidic. This effect is transmitted also by an intermediate double bond (vinylene bridge). An example of this redox process is the conversion of o-nitrotoluene into anthranilic acid, 2-aminobenzoic acid, in alkaline medium [5]. So, the oxidation of the obtained ketol, with the simultaneous reduction of the nitro group to nitroso, is indicated in Figure 3. The nitroso group reacts with the β-diketone active hydrogens (Ehrlich-Sachs reaction) [6,7]. The resulting hydroxylamine, less reactive than the nitroso group, permits reaction at the outward acetyl group, i.e., a β-diketone aci breakdown, giving acetic acid and an enolate, Figure 4. The above enolate can form indolenine-3-one, Figure 5. A competing reaction of this enolate is nucleophilic addition to an indolenine-3-one molecule, forming the indigo frame, Figure 6. This intermediate permits the next steps to occur smoothly, since there is no steric hindrance as in the precursors postulated previously [3]. A dehydration followed by isomerization, both involving active hydrogens, gives the indigo molecule, Figure 7. Scientific criticism was applied in order to accept or reject each proposal. Thus, a complete and authoritative reaction sequence was provided.
The Clemmensen Reduction
Two principal proposals on this theme have been advanced: the 'Carbanionic Mechanism' [8] and the 'Carbenoid Mechanism' [9].
There must not be two very different mechanisms in order to explain the same reaction. So, a complete and coherent reaction mechanism is presented [10]. It involves the formation of a free carbene as well as a zinc carbene and two different carbanionic species as intermediates. This point of view is based on well known reactivities.
The reduction of an aldehyde or ketone to the alkane analog by means of amalgamated zinc and hydrochloric acid is explained as follows, Figure 8. The carbonyl compound is protonated by a solvated hydroxonium ion [11,12]. The oxonium chloride is in resonance with a carbonium ion electromer. This carbocation reacts, in a chemisorption step, with elemental zinc, an electrodoting reagent [13], leading to a two-electron reduction. Reaction of the organometallic intermediate with hydrochloric acid yields a carbene, with concomitant water and zinc chloride elimination. The electron-deficient species reacts with another zinc atom (metal carbene complex), giving a carbanion and an electron-deficient metal. This zwitterion reacts with hydrochloric acid and yields a deoxy organometallic derivative. A carbanion is formed by ionization, and then protonation affords the reduction product.
The ionization can be assisted by interaction with a chloride ion, eliminating zinc chloride. This is supported by the fact that zinc chloride reacts with hydrochloric acid, forming the tetrachlorozincate anion, ZnCl₄²⁻ (zinc receptivity to chloride ions).
The Baeyer-Emmerling Synthesis of Indigo
We present highlights of our paper on this theme [14].
Oxidation of Indigo to Isatin (2,3-dioxoindoline)
Oxidation mechanisms are rarely treated in Organic Chemistry texts, which emphasize utility and experimental conditions. The nitric acid oxidation of indigo is interesting since it involves double bond addition, protolysis, epoxidation, ring opening, a second protolysis, and a carbon-carbon breakdown giving two carbonyl groups, i.e., two lactam groups of two isatin molecules, Figure 9.
Cupric Chloride Oxidation of 3-aminoxindole to Isatin
This is a typical example of an ion-radical process, Figure 10.
Zinc Reduction
The reduction mechanism of the above-obtained isatin chloride is shown in Figure 12. The first neutral intermediate is the chloro ketone resulting from double bond saturation via acid catalysis and reduction with zinc. A second reduction step yields an electrodoting carbanion that can react with a remaining chloro ketone molecule to give the indigo frame. Finally, aerial oxidation of leucoindigo gives indigo blue.
The reactive carbanion can be stabilized by resonance as a chloro zinc enolate with O-metallation, Figure 13.
Indirubin Formation in the Baeyer-Emmerling Synthesis
There was no mechanism explaining how indigo red (indirubin) is formed in this synthesis. Two routes to this important co-product of indigo blue were advanced [15]. Indirubin is currently employed in cancer treatment [16].
First Route to Indirubin
This consists of the reaction of the carbanion arising from the reduction steps of isatin chloride with the intermediate α-chloro ketone, Figure 14. This is a rather complex six-step synthesis that involves: 1, nucleophilic addition to the carbonyl group (the second option to chlorine substitution); 2, intramolecular epoxide formation; 3, ring opening by protolysis; 4, enol formation; 5, isomerization; and 6, aerial oxidation.
Second Route to Indirubin
The other way to indigo red in this synthesis is reaction of the carbanion from zinc/acetic acid reduction of isatin chloride with remaining isatin chloride, Figure 15.
In this secondary condensation reaction, the resulting alkoxide reacts with acetic acid and the hydroxy group is dehydrated. Finally, the reactive chloro imine is hydrated and the unstable 2,2-chloroalcohol yields the carbonyl group (lactam).
However, this mechanism does not account for the regiochemistry, i.e., why the reaction proceeds to C-1 and not to C-3.
The explanation given in our paper [19] is as follows: after the phenylhydrazone formation, the only existing difference is that at C-1 there is a primary alcohol, whereas at C-3 there is a secondary one, the primary alcohol being more reactive in the presence of the base than the secondary one. Since nitrogen is a Lewis base, the β-nitrogen can react with the primary alcohol, a better hydrogen donor than the secondary one. Then an oxido-reduction occurs at C-1 and C-2, i.e., a 1,4-hydrogen transfer via a cyclic, concerted five-membered reaction mechanism (internal catalysis).
Thus, the regiochemistry has been explained, Figure 17.
Reactivities Involved in a Mixed Osazone Formation
The reaction of D-mannose phenylhydrazone with p-bromophenylhydrazine was carried out in order to settle which of Weygand's proposed routes for osazone formation the reaction followed [22].
The resulting intermediate splits off p-bromoaniline and aniline (70/30). However, this experimental ratio has not been explained.
The reactions involved in an acid catalyzed process are in Figure 18.
Figure 18. Mixed osazone formation reactions via acid catalysis
In a previous paper [20] these reactions are discussed. In the mixed ene-bis-hydrazine, the α-nitrogen in the aniline fragment is more basic than the other α-nitrogen in this molecule. This can be deduced since aniline (pKa = 4.60) is more basic than p-bromoaniline (pKa = 3.86) [23]. Figure 18 shows the preferred protonation in the ene-bis-hydrazine, i.e., in the aniline segment. This protonation would favour an aniline/p-bromoaniline ratio, but the experimental ratio is the opposite: p-bromoaniline/aniline, 70/30. Thus, this step must be an internal process without acid catalysis, with the enamine reactivity and the leaving group being of utmost importance, as we will see.
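A rough quantitative check supports this argument: if protonation were acid-catalyzed and equilibrium-controlled, the relative protonation of the two α-nitrogens would scale with the basicity difference of the parent anilines. The sketch below estimates that ratio from the quoted pKa values, treating the ΔpKa of the free anilines as a proxy for the two sites, which is itself an assumption.

```python
# pKa values of the conjugate acids, as quoted in the text [23].
pKa_aniline = 4.60
pKa_p_bromoaniline = 3.86

# Equilibrium protonation preference ~ 10**(delta pKa).
ratio = 10 ** (pKa_aniline - pKa_p_bromoaniline)
print(f"acid-catalyzed prediction: aniline-side protonation favored ~{ratio:.1f}:1")
# The observed 70/30 split favors p-bromoaniline instead, so acid catalysis
# cannot control this step, consistent with the internal-process argument.
```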
In the mixed ene-bis-phenylhydrazine, the inductive effect of the bromine atom is transmitted to the end of the π-system, creating a δ+ at the α-nitrogen, since the p-orbitals of Br and N are both involved in the π-system (a long-distance effect), Figure 19. The electron-donor effect of the enamine vicinal to the aniline group, combined with the δ+ in the p-bromoaniline fragment, can form a phenylhydrazone at C-1 and an imino group at C-2, with the concomitant elimination of a good leaving group, since the negative charge is on the nitrogen with a previous δ+, as indicated in Figure 19.
On the other hand, the δ+ at the α-nitrogen decreases the reactivity of the vicinal enamine group, since the formation of an adjacent positive charge is hampered or unfavourable, leading to the secondary product, Figure 20. The ketimino hydrazone and aldimine hydrazone formed above react as indicated in Figure 21, yielding two different osazones.
The Mechanism of the Froehde Reaction
The Froehde reagent, prepared from molybdic acid or sodium molybdate dissolved in sulphuric acid, is reduced by phenols and some alkaloids, especially morphine, Figure 22.
This colour test is used for the preliminary identification of opioids; morphine gives blue. Since the reaction course and the end products were not known, we advanced a reaction mechanism in accordance with established reactivities and the chemical behaviour of the inorganic reagent, molybdic acid [24]. There can be two redox steps. The first is a hydroxylation process, Figure 23. The electron-donor phenolic section reacts with the electron-acceptor molybdic acid. Proton transfer and aromatization yield an organometallic intermediate. Finally, protolysis affords a pyrocatechol derivative and molybdenum dioxide hydrate. The dioxide is coloured violet.
The second reduction step is oxidation to the ortho quinone, Figure 24. After an acid-catalyzed esterification, a redox reaction is promoted by protolysis. A yellow ortho quinone is obtained, plus hydrated molybdenum dioxide. The mixture is green due to the blue/yellow combination.
In the original communication there is a discussion about the colours obtained with synthetic opioids with a methoxy group at C-3. These compounds give different colours and we concluded that these results are probably due to halochromism (coloured salts).
Conclusion
After discarding clearly erroneous ancient routes for the Baeyer-Drewsen indigo synthesis, two more recent proposals remained. A careful study of the involved reactivities permitted us to discriminate between the correct and the erroneous ones. This was done on the basis of known experimental work. Thus, it was possible to establish a single and complete reaction mechanism for this synthesis.
A more complex situation was found in the case of the Clemmensen Reduction because there were two very different reaction mechanisms: the 'Carbanionic Mechanism' and the 'Carbenoid Mechanism'. Notwithstanding that both routes had some experimental evidence, there must not be two different reaction mechanisms for the same reaction. However, it was possible to present a unified theory involving both carbenes and carbanions as well.
The ancient Baeyer-Emmerling synthesis of indigo had no reaction mechanisms at all, so this route was updated by supplying the missing mechanisms. The reactivity of each functional group was carefully taken into account in accordance with the specialized literature. Another communication related to this synthesis deals with the formation mechanism of indigo red (indirubin), a co-product of indigo blue. This red product is of current importance since it is used in cancer treatment.
Another group of compounds studied in this communication is the carbohydrates. The regiochemistry of the Heyns Rearrangement has been elucidated. Thus, the missing theory of osazone formation has been provided, and it is in accordance with experimental results obtained in Russia employing sophisticated materials such as mixed phenylhydrazines, one of them with radioactive nitrogen.
The reactivity in a case of mixed osazone formation has been explained. These important reactions involve an unusual 'internal process'.
Finally, the Froehde Reaction mechanism has been proposed for the first time. This was done after a careful study of molybdenum chemistry, since the reagent in this colour test is molybdic acid, which is reduced to several oxides and salts. Some synthetic opioids give colours very different from the blue obtained with morphine. This is because there is no phenolic group in their molecules, and there is just halochromism, salt formation with the sulphuric acid used in the test. | 2019-05-07T13:49:47.701Z | 2019-04-13T00:00:00.000 | {
"year": 2019,
"sha1": "b0d04ba04e730a897594da5d64f3aa80fdbcbb74",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.mc.20190701.14.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "40a967b53dd5bc8c310180caf0fe9219775afce1",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119182889 | pes2o/s2orc | v3-fos-license | Observations of the Crab pulsar with the MAGIC telescopes
We report on observations of the Crab pulsar with the MAGIC telescopes. Data were taken both in the mono mode ($>25$ GeV) and in the stereo mode ($>50$ GeV). Clear signals from the two peaks were detected with both modes, and the phase-resolved energy spectra were calculated. By comparing with the measurements done by Fermi-LAT, we found that the energy spectrum of the Crab pulsar does not follow a power law with an exponential cutoff, but extends as a power law after the break at around 5 GeV. This suggests that the emission above 25 GeV is not dominated by curvature radiation, which is inconsistent with the standard prediction of the OG and SG models.
Introduction
The Crab nebula is the remnant of a historic supernova explosion that occurred in the year 1054 A.D. The pulsar B0531+21 (also commonly named the Crab pulsar) is located at its center and emits strong pulsed radiation in a wide energy range from radio to high-energy gamma-rays. The Crab pulsar and a few other pulsars are among the brightest known sources at 1 GeV. However, a spectral steepening made their detection above 10 GeV elusive despite numerous efforts. The energy thresholds of imaging atmospheric Cherenkov telescopes (IACTs) were, in general, too high, while the gamma-ray collection areas of satellite-borne detectors were too small to detect pulsars above 10 GeV. On the other hand, a precise measurement of the energy spectrum at and above the steepening leads to an important verification of the standard pulsar models. In the case of the Polar Cap (PC) model [1], a so-called super-exponential cutoff is expected, while the Outer Gap (OG) model [2] and Slot Gap (SG) model [3] predict a clear exponential cutoff. The highest energy of the detected photons can be directly converted into a lower limit on the distance of the emission region from the stellar surface, which should be a few times the stellar radius according to the PC model. In 2008, the MAGIC telescope detected the Crab pulsar above 25 GeV [4] with the newly implemented trigger system, the Sum trigger [5]. This detection excluded the PC model. In August 2008, the new satellite-borne gamma-ray detector with 1 m² collection area, Fermi-LAT, became operational, and it could measure the spectra of gamma-ray pulsars up to a few tens of GeV. The spectra measured by Fermi-LAT could be described with a power law with an exponential cutoff, which also rejected the polar cap model and supported the OG and SG models. However, the cutoff energy of the Crab pulsar spectrum determined by Fermi-LAT was ∼6 GeV, while MAGIC detected the signal above 25 GeV. In order to verify the exponential cutoff spectrum, i.e. the OG and SG models, a precise comparison of the energy spectra measured by the two instruments is needed. Here we present the spectral study of the Crab pulsar, using the public Fermi-LAT data and four years of MAGIC data recorded by the single telescope and the stereoscopic system.
MAGIC telescope
The MAGIC telescope is a new-generation IACT located on the Canary island of La Palma (27.8°N, 17.8°W, 2225 m a.s.l.). It consists of two telescopes with a reflector diameter of 17 m. The first telescope was built in 2002–2003 and has been operational since 2004. Thanks to the world's largest reflector, the energy threshold of the first MAGIC telescope with the standard trigger is 60 GeV, which is the lowest among IACTs. In order to detect gamma-ray pulsars, a new trigger system called the Sum trigger was developed and implemented in October 2007. It reduced the energy threshold further, down to 25 GeV, which resulted in the detection of the Crab pulsar [4]. In 2009, the second MAGIC telescope was built ∼ 80 m from the first telescope. The second one is basically a clone of the first, although the Sum trigger system has not yet been implemented in it. We have observed the Crab pulsar in stereoscopic mode since 2009. The stereo trigger requires a coincidence of the triggers of both telescopes. For technical reasons, the Sum trigger in the first telescope cannot participate in the stereo trigger, i.e., stereoscopic observations were based on the standard trigger for both telescopes. The energy threshold of the stereo mode is about 50 GeV.
Mono-mode observations
MAGIC observed the Crab pulsar with a single telescope and the Sum trigger in winter 2007–2008 and winter 2008–2009. After careful data selection, the total effective observation time was 25 hours and 34 hours for the first and second campaigns, respectively. The energy threshold of these observations is 25 GeV. Normally, the IACT technique utilizes many image parameters to distinguish between hadron events and gamma-ray events. However, in the case of mono-mode observations in the very low energy regime below 60 GeV, the image parameters are almost powerless except for the Hillas parameter ALPHA. Therefore, the hadron background rejection was based only on ALPHA. The light curve of the Crab pulsar obtained with the mono-mode observations is shown in the upper panel of Fig. 1. Following the usual convention [6] of P1 (phase interval −0.06 to 0.04) and P2 (0.32 to 0.43), the numbers of excess events in P1 and P2 are 6200 ± 1400 (4.3σ) and 11300 ± 1500 (7.4σ). Summing P1 and P2, the excess corresponds to 7.5σ. The background level was estimated using the so-called off-pulse phase (0.52–0.88) [6]. Based on these excess events, the phase-resolved energy spectra of the Crab pulsar above 25 GeV were computed, as shown in Fig. 2. They can be well described by power laws, and the best-fit parameters are summarized in Table 1. The energy spectrum measured by Fermi-LAT is also shown in the same figure. For the Fermi-LAT points, one year of Fermi-LAT data (from August 2008 to August 2009) was used. The best-fit parameters obtained by the unbinned likelihood analysis are summarized in Table 1. The continuation from the Fermi-LAT measurements to the MAGIC measurements is rather smooth, while it is clear that the exponential-cutoff spectrum determined by Fermi-LAT is not valid at MAGIC energies. A detailed statistical analysis showed that the inconsistencies amount to 6.7σ, 3.0σ, and 5.8σ for P1 + P2, P1, and P2, respectively. Further details of the results of the mono-mode observations are presented in [8], [9] and [10].
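A minimal sketch of the kind of model comparison described above, assuming hypothetical flux points (the real values are those shown in Fig. 2 and Table 1, which are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(E, f0, gamma):
    # dF/dE = f0 * (E / 10 GeV)^(-gamma)
    return f0 * (E / 10.0) ** (-gamma)

def cutoff_power_law(E, f0, gamma, Ec):
    # Power law with an exponential cutoff at energy Ec
    return power_law(E, f0, gamma) * np.exp(-E / Ec)

# Hypothetical flux points (E in GeV), generated from a cutoff model
# with 10% scatter, purely to make the sketch self-contained.
rng = np.random.default_rng(0)
E = np.array([1.0, 3.0, 10.0, 25.0, 60.0, 100.0])
F = cutoff_power_law(E, 6e-12, 2.0, 30.0) * rng.normal(1.0, 0.1, E.size)
F_err = 0.1 * F

p_pl, _ = curve_fit(power_law, E, F, p0=[1e-11, 2.0], sigma=F_err)
p_cut, _ = curve_fit(cutoff_power_law, E, F, p0=[1e-11, 2.0, 20.0], sigma=F_err)

for name, model, p in (("power law", power_law, p_pl),
                       ("power law + cutoff", cutoff_power_law, p_cut)):
    chi2 = np.sum(((F - model(E, *p)) / F_err) ** 2)
    print(f"{name}: chi2 = {chi2:.1f} (params {p})")
```

Comparing the goodness of fit of the two shapes over the combined Fermi-LAT and MAGIC energy range is, schematically, how significance levels like those quoted above can be derived.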
Stereo-mode observations
As mentioned before, MAGIC has observed the Crab pulsar in stereo mode since 2009. Though the energy threshold of the stereo mode (50 GeV) is higher than that of the mono observations with the Sum trigger (25 GeV), the VHE tail of the spectrum allows us to detect the Crab pulsar above 50 GeV. The advantages of stereo-mode observation are higher background rejection power, better angular resolution, and better energy resolution. In the case of stereo-mode observations, by using images from both telescopes, one can reconstruct the arrival direction of the gamma-rays better than in an ALPHA analysis. The estimation of the shower-maximum height is also more precise than in mono-mode, which leads to higher background rejection and better energy resolution. The lower panel of Fig. 1 shows the light curve of the Crab pulsar obtained from stereo-mode observations. The total observation time used in this analysis is 73 hours. Since the pulses are much narrower than the conventionally defined P1 and P2 phases, the statistical significance of the excess was evaluated with the Z²₁₀ test, the H-test, and a χ² test, which gave 8.6σ, 6.4σ, and 7.7σ, respectively. By fitting two Gaussians to the two peaks, the peak positions are estimated to be 0.005 ± 0.003 and 0.3996 ± 0.0014, while the corresponding FWHMs are 0.025 ± 0.007 and 0.026 ± 0.004. Defining the signal phases as ±2σ of the fitted Gaussians around the peaks, the significance of the excess is 10.4σ, 5.5σ, and 9.9σ for P1 + P2, P1, and P2, respectively. The phase-resolved energy spectra were also calculated and are shown in Fig. 3. The dark red squares denote the spectra of the ±2σ phase intervals described above, while the yellow squares denote the ones with the conventional P1/P2 definitions. They connect smoothly with the mono-mode measurements and follow a power law. Further details of the results of the stereo-mode observations are presented in [11].
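A minimal sketch of the Z²ₘ periodicity statistic used above (here with m = 10); the formula is the standard one, and the event phases below are synthetic:

```python
import numpy as np

def z2_statistic(phases, m=10):
    """Z^2_m statistic for pulse phases in [0, 1).

    Under the null hypothesis of no pulsation, Z^2_m follows a
    chi-squared distribution with 2*m degrees of freedom.
    """
    angles = 2.0 * np.pi * np.asarray(phases)
    z2 = 0.0
    for k in range(1, m + 1):
        z2 += np.cos(k * angles).sum() ** 2 + np.sin(k * angles).sum() ** 2
    return 2.0 * z2 / len(phases)

# Uniform (background-only) phases should give Z^2_10 near its
# null expectation of 2*m = 20.
rng = np.random.default_rng(0)
print(z2_statistic(rng.uniform(0.0, 1.0, 10_000), m=10))
```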
Discussion and Conclusion
The new measurements with MAGIC-mono and MAGIC-stereo revealed that the energy spectrum of the Crab pulsar does not roll off as fast as an exponential cutoff but extends as a power law after the break at around 5 GeV. This suggests that the emission above 25 GeV is not dominated by curvature radiation, as the standard OG and SG models predict. Further theoretical studies and more observations are needed to understand the emission mechanism of the Crab pulsar. A theoretical interpretation of the MAGIC measurements by K. Hirotani is presented in [9], [10] and [11].
[Figure caption] The phase-resolved energy spectra of the Crab pulsar measured by Fermi-LAT and by MAGIC with the Sum trigger (mono-mode). The energy spectrum measured by Fermi-LAT is consistent with a power law with an exponential cutoff, while the MAGIC measurements above 25 GeV deviate from its extrapolation. The inconsistencies amount to the 6.7σ, 3.0σ and 5.8σ level for P1 + P2, P1 and P2, respectively.
[Table 1 values: 0.67 ± 0.02, 1.95 ± 0.03, 5.9 ± 0.7, 10.0 ± 1.9, 3.4 ± 0.5] Table 1: The best-fit parameters of the spectra for different phase intervals. The second to fourth columns are obtained with Fermi-LAT data assuming the spectral shape of dF (E) | 2011-09-28T04:58:34.000Z | 2011-09-28T00:00:00.000 | {
"year": 2011,
"sha1": "89d0e3b7d057426fa2ad836764a5b86bfe4609b2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1f4ef341551faa9844a83a60edb963f5eb0877e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234638194 | pes2o/s2orc | v3-fos-license | Unique Transcriptome and Gene Expression Analysis of Rice Seedling Reveals Different Cadmium Response Regulatory Mechanisms Between Indica and Japonica Rice
Background: In general, the Cd content in indica rice is usually higher than that in japonica. However, the mechanism underlying this discrepancy is unclear. Thus, understanding the genetic and molecular basis of Cd stress responses in indica and japonica is extremely important for rice improvement programs. Results: In this study, two rice varieties, japonica 02428 and indica CH891, were continuously exposed to Cd, and seedlings of the two varieties at two critical stages (3rd and 5th day) were selected for dynamic gene analysis by the transcriptome method. The results showed that CH891 was more sensitive to Cd than 02428, and a total of 7,204 and 6,670 differentially expressed genes (DEGs) associated with Cd stress were detected on the 3rd and 5th days, respectively. Furthermore, we divided these DEGs into three categories: SCR (sensitive variety, Cd-responsive), RCR (resistant variety, Cd-responsive) and CCR (common Cd-responsive). Enriched metabolic pathway analysis showed that the DEGs were preferentially expressed in a stage-specific and cultivar-specific manner; secondary metabolic processes were enriched in SCR, while protein metabolism and plant hormone pathways were enriched in RCR. These divergent metabolic pathways might be the major cause of the different Cd response mechanisms of indica and japonica rice. Conclusion: These results provide a novel insight into the Cd response mechanisms of rice seedlings of different varieties; the important Cd-responsive DEGs were frequently involved in specific biological processes and metabolic pathways that may explain the difference in Cd response mechanisms between indica and japonica rice. Phenylpropanoid biosynthesis, glutathione metabolism, diterpenoid biosynthesis, brassinosteroid biosynthesis, alpha-linolenic acid metabolism and ABC transporter pathways were enriched in Cd5_CH891 vs. CK5_CH891. Protein processing in the endoplasmic reticulum, porphyrin and chlorophyll metabolism, photosynthesis, phenylpropanoid biosynthesis and circadian rhythm–plant pathways were enriched in Cd5_02428 vs. CK5_02428.
Understanding the genetic and molecular basis of Cd stress between indica and japonica is thus extremely important for rice improvement programs.
Recently, many studies have used different rice cultivars to study Cd stress [11], and several genes have been identified as associated with the Cd response, such as P1B-ATPase genes, natural resistance-associated macrophage protein genes, cation diffusion facilitator genes, ATP-binding cassette transporters and other low-affinity cation transporter genes [12][13][14][15]. However, it was confirmed that genes encoding GST, heat shock proteins and cytochrome P450 were also strongly induced under Cd stress [16]. Although functional analyses of these individual genes are helpful for understanding the regulatory mechanisms of the Cd response in rice, the genetic basis of these mechanisms in different subspecies has not been clearly identified. Therefore, a comprehensive understanding of the molecular mechanisms regulating seedling development of different subspecies under Cd stress is required to facilitate new insight into the Cd response mechanism. Access to transcriptome and next-generation RNA sequencing (RNA-seq) technology provides an opportunity to reveal genetic diversity among various genotypes/cultivars under Cd stress.
To identify the genetic and molecular basis of Cd stress responses in indica and japonica, an elite japonica variety, 02428, and an elite indica variety, CH891, were continuously treated with Cd, and seedlings of the two varieties at two critical stages were selected for dynamic gene analysis by the transcriptome method. In total, we identified 7,204 and 6,670 DEGs on the 3rd and 5th days, respectively. To investigate the different Cd response mechanisms in different cultivars and stages, we studied the enriched metabolic pathways of the DEGs and found that the response mechanisms were preferentially stage-specific and cultivar-specific. The results provide new insight into the difference in Cd tolerance between indica and japonica.
Plant materials
Two rice (Oryza sativa L.) varieties were used in this study. Changhui 891 (CH891) is an elite indica restorer line from south China, developed by the Key Laboratory of Crop Genetics and Breeding, Jiangxi Agricultural University, Jiangxi Province, China; its variety rights number in the China Rice Data Center is CNA20161213.9 [17]. 02428 is a wide-compatibility japonica variety, selected and bred by the Institute of Genetics and Physiology, Jiangsu Academy of Agricultural Sciences, Jiangsu Province, China. It is a hybrid between radiation-derived offspring of crab rice from Yunnan Province and radiation-derived offspring of Jibang rice from Shanghai [18], and it is also catalogued in the China Rice Data Center.
Plant growth conditions and treatments
Rice seedlings were grown in culture dishes. Briefly, thirty seeds were selected by discarding shriveled and empty seeds, surface-disinfected by immersion in a 4% sodium hypochlorite solution for twenty minutes, and rinsed three times with distilled water. Thirty germinated seeds were placed in each dish; the treatment groups (Cd) were treated with 100 mg/L CdCl2, while the control groups (CK) received distilled water. Seedlings were maintained in an incubator at 28 ± 2°C under a 16/8 h light/dark cycle. Appropriate amounts of CdCl2 solution and distilled water were added every day as required to avoid desiccation. Three biological replicates were arranged for each line.
Samples harvest and phenotype characterization
Seedlings of the two varieties (CH891 and 02428) were harvested, and the seedling length of the population was measured on the third day (Cd3 and CK3) and the fifth day (Cd5 and CK5). Seedlings were wrapped in tin foil, immediately frozen in liquid nitrogen, and stored at −80°C. Three biological replicates and three technical replicates were performed for the measurement of seedling length.
mRNA library construction and sequencing
Total RNA was extracted, in three biological and technical replicates, at each of the 3rd and 5th days after Cd stress (DAS) and after control treatment (DAC) using TRIzol reagent (Invitrogen, CA, USA) following the manufacturer's procedure. Total RNA quantity and purity were analyzed with a Bioanalyzer 2100 and the RNA 6000 Nano LabChip Kit (Agilent, CA, USA), with RIN > 7.0. Approximately 10 µg of total RNA per sample was used to isolate poly(A) mRNA with poly-T oligo-attached magnetic beads (Invitrogen). Following purification, the mRNA was fragmented into small pieces using divalent cations at elevated temperature. The cleaved RNA fragments were then reverse-transcribed to create the final cDNA library in accordance with the protocol of the mRNA-Seq sample preparation kit (Illumina, San Diego, USA); the average insert size of the paired-end libraries was 300 bp (± 50 bp). Paired-end sequencing was then performed on an Illumina HiSeq 4000 at LC Sciences (USA) following the vendor's recommended protocol.
Normalization of gene expression levels and identification of differentially expressed genes
Sequencing reads were mapped to the reference sequences. The mapped reads of each sample were assembled using StringTie. Then, all transcriptomes from the samples were merged to reconstruct a comprehensive transcriptome using perl scripts. After the final transcriptome was generated, StringTie and edgeR were used to estimate the expression levels of all transcripts. StringTie was used to quantify mRNA expression levels as fragments per kilobase of exon model per million mapped reads (FPKM). Differentially expressed mRNAs and differentially expressed genes (DEGs) were selected with log2(fold change) > 1 or log2(fold change) < −1 and with statistical significance (p-value < 0.05) using R packages.
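A minimal sketch of the DEG filter just described, applied to a hypothetical results table (the column names are assumptions for illustration, not the study's actual output headers):

```python
import pandas as pd

# Hypothetical differential-expression results; in the study these
# values come from the StringTie/edgeR pipeline.
res = pd.DataFrame({
    "gene":   ["LOC_Os03g57240", "LOC_Os03g08470", "geneX"],
    "log2FC": [1.8, -0.4, -2.3],
    "pvalue": [0.001, 0.30, 0.02],
})

# DEG criteria from the text: |log2(fold change)| > 1 and p < 0.05.
degs = res[(res["log2FC"].abs() > 1) & (res["pvalue"] < 0.05)]
up = degs[degs["log2FC"] > 0]
down = degs[degs["log2FC"] < 0]
print(len(up), "up-regulated;", len(down), "down-regulated")
```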
Quantitative real-time PCR (qRT-PCR) validation
To validate the RNA-seq results, the expression patterns of several genes were confirmed by quantitative real-time RT-PCR (qRT-PCR). For qRT-PCR, 1 µg of total RNA was used to synthesize cDNA using the PrimeScript™ RT reagent Kit (Perfect Real Time) (TaKaRa). The qRT-PCR was carried out using SYBR® Premix Ex Taq II (Tli RNaseH Plus; TAKARA BIO Inc., Shiga, Japan) and run on a LightCycler 480 (Roche, Basel, Switzerland) according to the manufacturer's instructions. The qRT-PCR program was 95°C for 30 s, followed by 40 cycles of 95°C for 5 s, 55°C for 30 s and 72°C for 30 s. All reactions were performed with three independent biological replicates for each sample, and three technical replicates were analyzed for each biological replicate. Relative gene expression was calculated with the ABI 7500 Real-Time PCR System software using the 2^(−ΔΔCt) method.
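A minimal sketch of the 2^(−ΔΔCt) calculation referred to above; the Ct values are illustrative only:

```python
def ddct_fold_change(ct_target_treat, ct_ref_treat,
                     ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method."""
    dct_treat = ct_target_treat - ct_ref_treat  # normalize to reference gene
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    ddct = dct_treat - dct_ctrl
    return 2.0 ** (-ddct)

# A target amplifying ~2 cycles earlier under Cd, relative to the
# reference gene, corresponds to a ~4-fold induction.
print(ddct_fold_change(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```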
Functional annotation and GO and KEGG classification
All expressed genes and significant genes were functionally annotated against the NR, KEGG, KOG, Pfam and Swiss-Prot databases, respectively. For a gene matching multiple protein sequences, the protein with the highest similarity score was taken as the optimal annotation.
Phenotypic variation between CH891 and 02428 under Cd stress
Following exposure of seedlings of the rice strains CH891 and 02428 to Cd, large phenotypic variations were observed. The seedling lengths of CH891 and 02428 were shorter under Cd stress than under the control condition (Fig. 1A-D). Furthermore, the seedling length of CH891 was significantly greater than that of 02428 under the control condition, while there was no significant difference between the two genotypes under Cd stress (Fig. 1E, F). This indicated that CH891 was more sensitive to Cd than 02428. The experiment was carried out continuously for 7 days, and the two highest seedling-length inhibition rates occurred on the 4th and 6th days (Fig. 1G). This indicated that gene expression was likely higher on the 3rd and 5th days than on other days. Therefore, seedlings were harvested on the 3rd and 5th days, and their RNA was sequenced.
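The seedling-length inhibition rate is not defined explicitly in the text; a minimal sketch under the common assumption that it is the percent reduction relative to the control is:

```python
def inhibition_rate(length_ck, length_cd):
    """Percent reduction of seedling length under Cd relative to control
    (assumed definition; the study does not give its formula)."""
    return (length_ck - length_cd) / length_ck * 100.0

# e.g., control 6.0 cm vs. Cd-treated 4.2 cm -> 30.0% inhibition
print(inhibition_rate(6.0, 4.2))
```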
RNA sequencing of the seedling transcriptomes of the two genotypes
A total of 1,070 million reads, with an average of 46 million reads per sample, were generated. The total number of valid reads was 1,057 million, with an average of 46 million valid reads per sample. The Q20 ratio for each sample was above 99%, and the Q30 base percentage was above 95% (Supplementary Table S1). Therefore, the quality of the data was very high and met the requirements for further analysis. To confirm the accuracy and reproducibility of the RNA-Seq results, 16 genes were selected for qRT-PCR using specific primers (Supplementary Table S2). The validation results for the 16 genes are shown in Fig. 5A, B. The qRT-PCR results were all consistent with the RNA-Seq data (Supplementary Table S2). We therefore conclude that our transcriptome sequencing results were credible.
Identification of differentially expressed genes (DEGs)
By comparing samples of the same rice cultivar under different conditions (control and stress) and samples of different rice cultivars (CH891 and 02428) under the same condition, at different stages, we assigned four comparison groups at each of the two stages, for eight comparison groups in total. DEGs were then selected in each comparison by requiring p ≤ 0.05. The results indicated that not only the cultivar but also the treatment and time point affected gene expression levels. The numbers of up-regulated and down-regulated genes in each comparison group are shown in Fig. 3A.
Functional enrichment analysis was performed for all these DEGs (Supplementary Fig. S1). KEGG enrichment analysis showed that plant-pathogen interaction, phenylpropanoid biosynthesis, tryptophan metabolism, flavonoid biosynthesis and diterpenoid biosynthesis pathways were enriched in Cd3_CH891 vs. Cd3_02428. Plant-pathogen interaction, tryptophan metabolism, phenylpropanoid biosynthesis, flavonoid biosynthesis, diterpenoid biosynthesis and amino sugar and nucleotide sugar metabolism pathways were enriched in CK3_CH891 vs. CK3_02428. Starch and sucrose metabolism, phenylpropanoid biosynthesis, glycolysis, carbon metabolism, alpha-linolenic acid metabolism, fatty acid degradation and brassinosteroid biosynthesis pathways were enriched in Cd3_CH891 vs. CK3_CH891. Plant-pathogen interaction and plant hormone signal transduction pathways were enriched in Cd3_02428 vs. CK3_02428. On the 5th day, phenylpropanoid biosynthesis, glutathione metabolism, flavonoid biosynthesis, brassinosteroid biosynthesis, amino sugar and nucleotide sugar metabolism and ABC transporter pathways were enriched in Cd5_CH891 vs. Cd5_02428. Tryptophan metabolism, sesquiterpenoid and triterpenoid biosynthesis and phenylpropanoid biosynthesis pathways were enriched in CK5_CH891 vs. CK5_02428.
All of these results revealed that there were more up-regulated DEGs under Cd stress than under the control condition in CH891, whereas there were more down-regulated DEGs under Cd stress than under the control condition in 02428, on both the 3rd and 5th days. Moreover, more DEGs were obtained under Cd stress in CH891 (the Cd-sensitive cultivar) than in 02428 (the Cd-resistant cultivar).
Classification of differentially expressed genes (DEGs)
A total of 7,204 unique DEGs were detected in the four groups on the 3rd day and 6,670 unique DEGs on the 5th day; these DEGs were divided into 15 subgroups (Fig. 3B, C). Excluding the DEGs of the groups irrelevant to Cd stress (CK3_CH891 vs. CK3_02428 on the third day and CK5_CH891 vs. CK5_02428 on the fifth day), the DEGs of the remaining seven subgroups were divided into three categories: Cd-responsive genes from the sensitive variety (SCR), Cd-responsive genes from the resistant variety (RCR), and common Cd-responsive (CCR) DEGs. There were 849, 676 and 770 DEGs in SCR, RCR and CCR on the 3rd day, respectively (Supplementary Table S3), whereas 959, 592 and 1,516 DEGs were in SCR, RCR and CCR on the 5th day, respectively (Supplementary Table S3).
In addition, we selected important Cd-responsive (ICR) genes in the three categories, with |log2(fold change)| ≥ 2 and FPKM ≥ 2 in at least one group, as representative DEGs. On the 3rd day, 554 ICR genes were screened, among which 194 were in SCR, 154 in RCR and 206 in CCR (Supplementary Table S3), whereas a total of 941 ICR genes were screened on the 5th day, of which 389 were in SCR, 216 in RCR and 336 in CCR (Supplementary Table S3). In addition, 17, 9 and 21 ICR genes were detected at both stages in SCR, RCR and CCR, respectively (Supplementary Table S3).
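A minimal sketch of the ICR screen just described, on a hypothetical table (the column names are assumptions):

```python
import pandas as pd

# Hypothetical DEG table with fold changes and per-group FPKM values.
df = pd.DataFrame({
    "gene":    ["g1", "g2", "g3"],
    "log2FC":  [2.5, -2.2, 1.1],
    "FPKM_Cd": [5.0, 0.4, 9.0],
    "FPKM_CK": [0.6, 3.1, 4.0],
})

# ICR criteria from the text: |log2FC| >= 2 and FPKM >= 2
# in at least one group.
icr = df[(df["log2FC"].abs() >= 2) &
         (df[["FPKM_Cd", "FPKM_CK"]].max(axis=1) >= 2)]
print(icr["gene"].tolist())  # -> ['g1', 'g2']
```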
Gene functional annotation analysis of SCR
Gene functional annotation analysis was conducted for the ICR genes of SCR. Cytoplasm and DNA binding were especially enriched in SCR3, while oxidation-reduction process and protein binding were particularly enriched in SCR5. Transcription factor activity, sequence-specific DNA binding, regulation of transcription, DNA-templated, integral component of membrane, and plasma membrane were enriched in both SCR3 and SCR5 (Supplementary Fig. S2). To gain more biological information and insight into the regulatory network, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was performed to help understand the molecular mechanism of the Cd response in rice seedlings. The KEGG analysis showed that isoflavonoid biosynthesis, inositol phosphate metabolism, flavonoid biosynthesis, and anthocyanin biosynthesis pathways were enriched in SCR3 (Fig. 2A), with isoflavonoid biosynthesis and anthocyanin biosynthesis significantly enriched. One DEG involved in anthocyanin biosynthesis was down-regulated while one was up-regulated in Cd3_CH891, and two DEGs involved in isoflavonoid biosynthesis were up-regulated in Cd3_CH891. Phenylpropanoid biosynthesis, alpha-linolenic acid metabolism, linoleic acid metabolism, glutathione metabolism, anthocyanin biosynthesis, ABC transporters, brassinosteroid biosynthesis and diterpenoid biosynthesis pathways were enriched in SCR5 (Fig. 2B). Transcription factor activity, sequence-specific DNA binding, protein binding, and regulation of transcription, DNA-templated were enriched in both RCR3 and RCR5 (Supplementary Fig. S2).
In terms of KEGG pathway enrichment analysis, the ribosome, regulation of autophagy and insulin resistance pathways were enriched in the ICR genes of RCR3 (Fig. 2C), with the ribosome and regulation of autophagy pathways most enriched. One DEG involved in regulation of autophagy was down-regulated and the other two were up-regulated in Cd3_02428; one DEG involved in ribosome was up-regulated and six were down-regulated in Cd3_02428. Protein processing in the endoplasmic reticulum, plant hormone signal transduction, thiamine metabolism, glucosinolate biosynthesis and base excision repair pathways were enriched in the ICR genes of RCR5 (Fig. 2D), with protein processing in the endoplasmic reticulum and plant hormone signal transduction most enriched. Four DEGs involved in protein processing in the endoplasmic reticulum were up-regulated and four were down-regulated in Cd5_02428; one DEG involved in plant hormone signal transduction was up-regulated and the other ten were down-regulated in Cd5_02428.
Gene functional annotation analysis of CCR
As for the ICR genes of CCR, chloroplast and zinc ion binding were enriched on the third day, while ATP binding was enriched on the fifth day. Protein binding, regulation of transcription, DNA-templated, cytoplasm, plasma membrane, integral component of membrane, transcription factor activity, sequence-specific DNA binding, and DNA binding were enriched in both CCR3 and CCR5 (Supplementary Fig. S2).
In terms of KEGG pathway analysis, the alpha-linolenic acid metabolism, phagosome, limonene and pinene degradation, and fatty acid degradation pathways were enriched in CCR3 (Fig. 2E); among them, alpha-linolenic acid metabolism and fatty acid degradation were markedly enriched. One DEG involved in alpha-linolenic acid metabolism was down-regulated and two were up-regulated, and one DEG involved in fatty acid degradation was up-regulated while the other was down-regulated. Plant-pathogen interaction, isoflavonoid biosynthesis, vitamin B6 metabolism, diterpenoid biosynthesis, and glutathione (GSH) metabolism pathways were enriched on the fifth day (Fig. 2F).
Among these, the plant-pathogen interaction, isoflavonoid biosynthesis, and glutathione (GSH) metabolism pathways were notably enriched. One DEG involved in isoflavonoid biosynthesis was down-regulated and two were up-regulated; two DEGs involved in glutathione metabolism were down-regulated and three were up-regulated; seven DEGs involved in plant-pathogen interaction were down-regulated and twenty were up-regulated.
To understand the interactions among all of the enriched GO terms at all stages of SCR, RCR and CCR, we constructed a network of significantly enriched GO terms (biological process) (Fig. 4). It showed that the biological processes focused on transport, responses to different stimuli, hormone metabolism, gene expression, cellular activity, and secondary metabolic processes, suggesting that different stress reactions were activated in response to Cd stress.
DEGs present on both the 3rd and 5th days after stress (DAS)
In total, we obtained 46 common ICR genes in the three categories at the two stages, and they were involved in several different pathways (Fig. 5C, D, E). In addition to the pathways listed above, some other pathways were detected, such as inositol phosphate metabolism; cutin, suberine, and wax biosynthesis; biosynthesis of amino acids; starch and sucrose metabolism; ascorbate and aldarate metabolism; base excision repair; aminoacyl-tRNA biosynthesis; galactose metabolism; and circadian rhythm–plant. This indicated that not only the significantly enriched pathways above but also non-significantly enriched pathways interact to regulate the Cd stress response in rice.
Genetic basis of Cd stress in rice
Cd stress strongly affects rice growth and development [19][20], impairing photosynthesis, transpiration and other physiological processes of plants and producing excessive oxygen free radicals. In recent years, Cd has attracted great attention due to its harmful effects on plant productivity. Several studies have shown that different cultivars respond differently under Cd stress [11,21]. However, the regulatory mechanisms distinguishing indica and japonica under Cd stress have not yet been reported. In the present study, we evaluated an elite japonica variety, 02428 (the Cd-resistant cultivar), and an elite indica variety, CH891 (the Cd-sensitive cultivar): seedlings were continuously treated with a 100 mg/L Cd solution for one week, and seedlings of the two varieties at two critical stages, at which seedling length was determined, were selected for dynamic gene analysis under Cd stress by the transcriptome method. The results showed that the metabolic pathways at 3rd DAS and 5th DAS differed not only for 02428 but also for CH891, implying that distinct genetic systems might be responsible for the Cd stress response at different stages in rice; a thorough and dynamic gene analysis of rice Cd stress was therefore necessary. Moreover, the metabolic pathways of CH891 and 02428 also differed at the same stage (3rd or 5th DAS); these metabolic pathways might be the reason that indica and japonica respond differently to Cd stress, and therefore exploiting the genes in these metabolic pathways might more fully explain the differences in Cd stress responses between indica and japonica. Overall, the DEG analysis showed that differences in metabolic pathways exist not only between genotypes but also between stages.
The stage-specific expression of DEGs between indica and japonica
KEGG pathway analysis was performed for all DEGs with significant differences. It showed that the enriched pathways of all DEGs are not significantly different from those of the following three categories (SCR, RCR, CCR), except for some amino sugar metabolism, amino acid metabolism, sugar metabolism and nucleotide sugar metabolism pathways. In general, the metabolic pathways of all DEGs mainly involve basic life activities, secondary metabolic processes, plant-pathogen interaction and plant hormone signal transduction. This suggests that the following analysis would not be greatly disturbed by the background.
To give more detailed information on the genes responding to Cd stress in rice, we divided the DEGs into three categories (SCR, RCR, and CCR) at the two critical stages. The enriched metabolic pathway analysis indicated that these DEGs were preferentially expressed in a stage-specific and cultivar-specific manner, and some significant metabolic pathways were identified for SCR, RCR and CCR.
Genes specifically expressed in SCR
The isoflavonoid biosynthesis and anthocyanin biosynthesis pathways were enriched in SCR3. Isoflavones were found to play an important role in plant defense reactions [22]. Under Cd stress, the cell wall of rice was damaged, which made it susceptible to pathogen infection and then induced the expression of isoflavones to improve disease resistance in plants [23]. Previous studies showed that isoflavones are usually abundant in legumes but scarce in other plants [24]. Combined with our results, this may provide new insight into the response of rice to Cd stress at the isoflavone level.
Moreover, anthocyanin is a kind of antioxidant that can scavenge oxygen free radicals in plant cells and relieve the toxicity of reactive oxygen species [25]. Roychoudhury reported that anthocyanin biosynthesis was induced under CdCl2 [11], which is consistent with our result. This suggests that anthocyanin, as an antioxidant, is related to the Cd stress response in rice and may relieve the toxicity of Cd.
Phenylpropanoid biosynthesis was one of the enriched KEGG pathways in SCR5. Phenylpropanoids participate in the antioxidant activity of cell walls and in the biosynthesis of lignin, which plays an important role in the response of plants to abiotic stress [26]. Several researchers have found that phenylpropanoid metabolites are significantly altered by Cd stress in rice [27] and Arabidopsis halleri [28]. Our results confirmed that phenylpropanoids contribute to the Cd response in rice. Besides, another enriched pathway in SCR5 was the ABC transporter pathway, a metabolic pathway confirmed to be important under Cd stress in rice [29]. Using the energy released by hydrolyzing ATP, ABC transporters can actively transport a variety of heavy metals across the cytomembrane, which is associated with plant stress tolerance and detoxification of heavy metals [30]. The finding that the ABC transporter pathway was enriched in SCR5 suggests that ABC transporters participate in the Cd response in rice and may help mitigate Cd stress. From our results and previous studies, ABC transporters indeed play an important role in the response to Cd stress. Diterpenoids, as secondary metabolites, were reported to inhibit intracellular ROS production and lipid peroxidation, and thus to enhance the antioxidant defense system in Sideritis [31]. In addition, our study found that the diterpenoid biosynthesis pathway was enriched in SCR5, suggesting that diterpenoid metabolism is also induced by Cd and plays an important role in the response to Cd stress in rice. Brassinosteroid (BR) has a wide range of biological functions and plays an important role in the adaptability of plants under stress. Bajguz and Hayat found that external application of BR can improve the salt tolerance of plants [32]. Our results indicate that BR is also related to the Cd response in rice, and the DEGs involved in this pathway were all related to cytochrome P450 and were up-regulated, which is consistent with the fact that cytochrome P450 participates in detoxification metabolism [16].
Genes specifically expressed in RCR
In terms of RCR, the enriched KEGG pathways also differed between 3rd and 5th DAS. Ribosome-related DEGs play key roles in cold stress signal transduction [33], and the ribosome pathway was identified as enriched in this study. In addition to the ribosome, regulation of autophagy was another enriched pathway in RCR3. Autophagy can transport oxidized proteins and damaged intracellular substances caused by stress into vacuoles for degradation, thereby reducing the accumulation of toxic substances in the cytoplasm [34]. It plays an important role in plant development and the response to stress [35]. Thus, our finding that the regulation of autophagy pathway was induced in RCR3 further verified that it plays a role in the rice Cd stress response to a certain extent. These two pathways were newly identified here in rice under Cd stress, and only at 3rd DAS.
As for RCR5, plant hormone signal transduction was enriched. In unfavorable environments, plants must respond quickly and accurately by activating the necessary physiological responses, which are usually mediated by plant hormones [36]. Phytohormones have been demonstrated to play an important role in mediating plant growth plasticity in response to metal stress, including Cd [37]. Consistently, we detected the plant hormone signal transduction pathway in RCR5. Our result is consistent with a previous study in rice; however, it was identified only in RCR5. The protein processing in endoplasmic reticulum pathway was also enriched in RCR5. Cellular exposure to Cd is known to strongly induce the unfolded protein response, which suggests that the endoplasmic reticulum (ER) is preferentially damaged by Cd in yeast [38]. In our study, this pathway was newly identified in rice under Cd stress, suggesting that Cd can lead to protein alterations in the ER in rice.
Genes specifically expressed in CCR
Differences also existed in CCR between the 3rd and 5th DAS. Alpha-linolenic acid metabolism and fatty acid degradation can be summarized as fatty acid metabolism, which was related to the Cd response in CCR3. Previous studies revealed that heavy metals can specifically alter fatty acid metabolism in plants; Cd can enhance the level of lipid peroxides and lead to strong changes in fatty acid content in rice [39][40]. Fatty acid metabolism was also found to be induced by the heavy metal Cr in rice [41]. In this study, our findings provided further evidence that fatty acid metabolism is a stress response to Cd in rice. As for the KEGG pathway analysis of CCR5, the plant-pathogen interaction, GSH metabolism and isoflavonoid biosynthesis pathways were enriched. There is a very complex interaction between the endogenous bacteria or fungi within a plant; these microorganisms can produce a variety of biological effects and can form a special growth state to better adapt to the environment [42]. Several previous studies have found that plant-pathogen interaction is induced by pathogen invasion [43]. The finding of this study raises the possibility that Cd damages the function and structure of cells and provides the conditions for pathogen attack. Many studies have reported that GSH alleviates heavy metal toxicity in rice [28,41]. However, the binding of GSH to heavy metals is catalyzed by glutathione S-transferase (GST), an important class of antioxidant and detoxification proteins in plant cells. In this study, all six of the DEGs involved in GSH metabolism were GST genes, suggesting that GSTs contribute to GSH metabolism, which participates in the Cd response in rice. Also, similar to the pathway analysis of SCR3, the isoflavonoid biosynthesis pathway responded to Cd stress in CCR5, implying that the isoflavonoid biosynthesis pathway is generally involved in the Cd response in rice.
It can be observed that the response mechanisms differed not only between 3rd and 5th DAS within the same category but also between categories. Enriched pathways in SCR were associated with secondary metabolic processes, while protein metabolism and plant hormone signal transduction were enriched in RCR. This indicates that CH891 might respond to Cd by inducing secondary metabolic processes and thereby develop its immunity and resistance to adversity. By contrast, the processes of protein production and degradation were affected in 02428, protecting it from Cd. This may demonstrate one way in which the Cd response mechanisms of indica and japonica rice differ.
The common DEGs between 3rd and 5th DAS in indica and japonica
In addition to the metabolic pathways discussed above, the common DEGs in the three categories at the two stages were detected and were enriched in various pathways (Fig. 4C, D, E). Some of these pathways were discussed above, and others were not significantly enriched in this study; the latter may involve minor genes that jointly contribute to the Cd response in rice. Among these DEGs, LOC_Os03g57240 (DST), a common DEG in RCR at 3rd and 5th DAS, has been reported to be a zinc finger transcription factor related to DNA replication that negatively regulates drought and salt tolerance in rice [44]. Its detection in this study implies that DST also regulates the Cd stress response in rice. We selected four genes for qRT-PCR, and they were verified against the RNA-Seq results (Fig. 2B).
Conclusions
In this study, high-throughput sequencing was used to sequence the transcriptomes of rice seedlings treated with Cd at different stages, which highlighted the transcriptional variation between two different rice varieties under Cd conditions. Statistical analysis of the 7,204 and 6,670 DEGs revealed three categories containing a total of 554 and 941 ICR DEGs in rice at the two stages, respectively. Furthermore, these important Cd-responsive DEGs were frequently involved in specific biological processes and metabolic pathways that may provide new insight into the difference in Cd response mechanisms between indica and japonica rice.
[Figure caption] The comparison of RNA-seq results and qRT-PCR analysis of gene expression levels, and the KEGG pathways of common ICR genes in SCR, RCR, and CCR at the two stages. (A) LOC_Os03g08470, LOC_Os04g39880 and LOC_Os09g36700 were in Cd3_CH891 vs. CK3_CH891; LOC_Os10g40720 was in Cd3_02428 vs. | 2020-10-28T18:32:02.269Z | 2020-09-08T00:00:00.000 | {
"year": 2020,
"sha1": "69e24d29249028873bfdea564a2b383949e09677",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-67623/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e14b3124d70170ffccefd63a714b33604c98ede9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
58470451 | pes2o/s2orc | v3-fos-license | Exploring Brow Position Changes with Age in Koreans
Purpose Several studies have described age-associated brow drooping in Westerners. However, there are few studies that address brow drooping in the Asian population, and especially in the Korean population. Therefore, we studied brow position changes with age in Korean individuals. Methods A total of 300 adults aged 18 years or older were enrolled. The ImageJ program was used to analyze digital photos of the patients by measuring the following parameters: marginal reflex distance-1, brow-to-pupil distance, nasal ala-lateral brow distance, lateral brow plumb line, and the angle formed by the line from the mid pupil to the midline of the brow and a line from the midline of the brow to the lateral brow. We divided the patients into three groups (18 to 40, 41 to 60, and older than 60 years) and compared them using the ANOVA test. Results Group A included 100 patients between 18 and 40 years of age. Group B included 100 patients between 41 and 60 years of age. Group C included 100 patients older than 60 years. There were significant differences between groups A and C and between groups B and C with regard to marginal reflex distance-1, brow-to-pupil distance and the angle. The lateral brow plumb line showed a significant difference only between groups A and B. The nasal ala-lateral brow distance was not significantly different across the three groups. Conclusions We sought to describe the physiologic facial changes that occur in Korean individuals and to establish guidelines for ptosis corrective surgery. We used various parameters to characterize the aging process in Asians. Our data demonstrated that, like Westerners, Koreans experience lateral brow drooping with age; however, this change was only significant in the group aged over 60 years.
Changes in brow position influence one's expression of certain negative emotions (such as decreased vitality, tiredness, or grief), as well as facial beauty. Therefore, there is interest in understanding the changes that typically occur with age and in identifying the ideal brow position [1]. The ideal brow position changes over time; modern Korean women prefer an arch-shaped brow extending from the perpendicular line through the ipsilateral nostril to the line passing through the lateral canthus, the ala of the nose, and the center of the pupil [2,3]. The aging process typically starts in the upper face before the lower face, and ptosis of the lateral brow is one of the earliest age-related changes [4]. In addition, the eyelid skin thins and the eyelid fat shrinks as one ages.
Previous studies have found that the lateral brow descends with age in Western populations, but there are few similar studies in Asian populations [5]. The findings from Western studies cannot necessarily be generalized to Asian populations [6]. Therefore, we investigated brow position in Koreans and the changes associated with age.
Materials and Methods
A total of 300 patients over 17 years old whose facial photographs had been taken at our clinic between August 2011 and February 2013 were included. This study was approved by the institutional review board of Dong-A University Hospital (DAUHIRB-17-067). Informed consent was obtained from each patient. Patients were excluded if they had a history of facial trauma, strabismus, congenital anomalies, cosmetic surgery, or thick brow tattoos. Patients were also excluded if they had health factors that potentially affect ocular position, such as hyperthyroidism. Facial photographs were captured using an EOS500 digital camera (Canon, Tokyo, Japan). A 10-mm-diameter red circle was attached to each patient's forehead for these photographs. The following five parameters were obtained using ImageJ ver. 1.50i (National Institutes of Health, Bethesda, MD, USA): marginal reflex distance-1 (MRD1); brow-to-pupil distance (BPD); nasal ala-lateral brow distance (NALB); the distance, along a horizontal line extending from the lateral canthus, to a vertical line dropped from the tail of the brow (lateral brow plumb line, LBPL); and the angle between the BPD and the line from the upper end point of the BPD to the lateral end of the brow (Angle). These parameters were measured retrospectively, scaled on the basis of the red circle (Fig. 1).
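A minimal sketch of the pixel-to-millimetre scaling step implied by the 10-mm reference circle; the study performed this step inside ImageJ, and the pixel values below are illustrative:

```python
REF_CIRCLE_MM = 10.0  # known diameter of the red reference circle

def px_to_mm(distance_px, ref_circle_px):
    """Scale a measured pixel distance by the reference circle diameter."""
    return distance_px * REF_CIRCLE_MM / ref_circle_px

# e.g., a brow-to-pupil distance of 231 px with the circle imaged
# at 100 px corresponds to 23.1 mm.
print(px_to_mm(231.0, 100.0))
```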
The patients were classified into groups A (18 to 40 years old), B (41 to 60 years old), and C (over 60 years old) according to age. The sex distribution by group was as follows: 46 men and 54 women in group A, 56 men and 44 women in group B, and 60 men and 40 women in group C. There were no significant differences in sex distribution across the three groups. The measured parameters were analyzed using ANOVA and the Scheffe test. Statistical analysis was performed using SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA).
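A minimal sketch of the one-way ANOVA described above, using synthetic data drawn from the MRD1 group means and standard deviations reported in the Results (scipy itself does not provide the Scheffe post hoc test, which would follow a significant ANOVA):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins generated from the reported MRD1 means/SDs;
# the study used the actual per-patient measurements.
rng = np.random.default_rng(1)
group_a = rng.normal(2.99, 1.52, 100)  # 18-40 years
group_b = rng.normal(3.15, 1.53, 100)  # 41-60 years
group_c = rng.normal(2.14, 1.31, 100)  # over 60 years

f_stat, p_val = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_val:.5f}")
```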
Results
The average MRD1 was 2.99 ± 1.52 mm in group A, 3.15 ± 1.53 mm in group B, and 2.14 ± 1.31 mm in group C. There were significant differences in MRD1 between groups A and C (p < 0.001) and between groups B and C (p < 0.001). In contrast, there was no significant difference between groups A and B (p = 0.756).
The average BPD was 23.11 ± 3.28 mm in group A, 23.26 ± 3.69 mm in group B, and 26.67 ± 9.87 mm in group C. There were significant differences in BPD between groups A and C (p = 0.001) and between groups B and C (p = 0.001). In contrast, there was no significant difference between groups A and B (p = 0.987).
The average NALB was 79.33 ± 6.96 mm in group A, 78.81 ± 7.63 mm in group B, and 78.67 ± 8.52 mm in group C. There was no significant difference in NALB among the three groups: groups A and B (p = 0.893), groups A and C (p = 0.833), and groups B and C (p = 0.992).
The average LBPL was 18.06 ± 3.12 mm in group A, 20.76 ± 10.31 mm in group B, and 19.35 ± 5.64 mm in group C. There was a significant difference in LBPL only between groups A and B (p = 0.027). In contrast, groups C and A and groups C and B had no significant differences in LBPL (p = 0.437 and p = 0.362, respectively).
The average Angle was 75.69 ± 10.97° in group A, 74.79 ± 5.83° in group B, and 68.27 ± 8.09° in group C. Groups C and A (p < 0.001) and groups C and B (p < 0.001) had significant differences in Angle. However, there was no significant difference between groups A and B (p = 0.760). There was no gender disparity in any of the groups.
Discussion
The accurate evaluation of brow position and age-related changes is important for choosing the proper surgical method [7]. We analyzed various parameters in Asians to reexamine the characteristics of the aging process reported in Western people. Sclafani and Jung [8] reported that the brow tail was higher in women than in men; however, the disparity between genders was not statistically significant in our study, nor were MRD1 and Angle statistically different between genders. However, group C demonstrated a significant decrease in these parameters compared with groups A and B. Two factors that contribute to a descending brow position are a rapid decrease in tissue elasticity and changes in the bony tissue of the superior orbital rim [9]. These changes, which may accelerate after 60 years of age, can explain the significant decreases in MRD1 and Angle in group C. Park et al. [10] reported that MRD1 is highest between ages 30 and 34 and lowest over 60. The group also found that MRD1 decreased with age, with greater changes in men than in women. Lee et al. [11] analyzed the MRD1 of 432 participants and found that there was a significant drop after the age of 60. This result is comparable to our findings. Seo and Ahn [12] also described several meaningful changes that occur after the seventh decade, including an increase in upper eyelid tissue, brow ptosis, and an increase in the lateral hood width of the eyelid. Although BPD tended to increase with age, there was no significant difference between groups A and B. However, group C had significantly different BPD changes compared with groups A and B. Moon et al. [13] found that BPD decreased in the third and fourth decades and increased after the fifth decade. van den Bosch et al. [14] explained that these changes may be caused by age-related disruption of the levator muscle aponeurosis and by involutional atrophy of the orbital fat. It may also be that this occurs as the eyebrows shift upward. Although Glass et al. [4] reported that NALB, LBPL and Angle decreased with age in Western participants, in our study NALB showed no significant difference among the groups, and LBPL was significantly different only between groups A and B. These findings suggest that Asian individuals may have less elastic tissue than Western people, resulting in a less dramatic lateral brow droop with age. In addition, while the Angle decreased significantly, there were no significant differences in NALB across the groups. These results suggest that the baseline NALB was too large to reflect the minute amount of brow ptosis; the distance between the lateral canthus and the lateral brow might be a better parameter than NALB. The LBPL also showed no significant differences across groups, except between groups A and B. We suppose that drooping of the lateral canthus may increase the LBPL between groups A and B. In addition, we suspect that acceleration of lateral brow ptosis with age will decrease the LBPL. Angle may reflect not only lateral brow ptosis but also brow shape. Brow shape varies across races, which may explain the discrepancy between the results for LBPL and Angle. In addition, because the BPD of group C was significantly higher than that of group A or B, Angle demonstrates significant differences among the groups, unlike LBPL. A previous study of Western people revealed an annual decrease of 0.01 mm in MRD1, 0.03 mm in BPD, and 0.22° in Angle [4].
By contrast, we did not find gradual decreases in these parameters; instead, there was an increase in BPD with age. One must also consider the possibility of an upward brow shift, which can create lateral brow drooping, although this is unrelated to the increasing BPD or decreasing Angle.
This study has several limitations. It was a cross-sectional study across age groups; we did not analyze brow position changes with age within each participant. Therefore, a long-term prospective analysis of brow position changes with age in individual participants is needed for more meaningful results. We randomly selected 100 patients for each group to reduce potential errors. In addition, a larger study with more participants and finer age subdivisions would allow a more detailed analysis.
In conclusion, the Korean brow position changes with age, but these changes are different from those of Westerners. The BPD increased, while Angle tended to decrease with age in Korean individuals. In contrast, NALB did not change significantly with age. We studied facial aging based on multiple parameters. Our data are important with regard to improvements in eyelid surgery.
Conflict of Interest
No potential conflict of interest relevant to this article was reported. | 2019-01-18T14:14:22.752Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "4633c2bbb536b55db12357cf41fc35a79993dc9f",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc6372380?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4633c2bbb536b55db12357cf41fc35a79993dc9f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269532064 | pes2o/s2orc | v3-fos-license | Changes in parents’ health concerns by post-preterm birth period in South Korea: a cross-sectional study
This study offers insights into the health concerns of parents of premature infants. Parental health concerns about premature infants vary over time, from before birth to post-discharge, necessitating supportive interventions to enhance parental understanding of their child's health status.
Problems occurring frequently after birth are well-documented [6]. The risks for preterm infants do not end after hospitalization, and they continue to have high vulnerability even after discharge. The care they receive at home has a crucial impact on their well-being in the long term [7].
Therefore, parents of premature infants can have negative thoughts and fears [8,9], and numerous articles about their emotional stress have been published.In addition, many studies have been published about parental stress caused by the medical conditions of their infants, the prematurity of their infants [10][11][12], and a lack of information about the diagnosis or treatment of their infants [12].Therefore, many studies have been conducted on the emotional experiences of the parents of infants hospitalized in the neonatal intensive care unit (NICU).Their highest stressor was the parental role, and parental stress was influenced by the length of the NICU stay, gestational age, infant's need for respiratory support, and cardiovascular diagnosis [8,10].Parents experienced difficulties in coping with hospitalization stress, grief, isolation, and a lack of preparation for the transition to parenthood [13].Therefore, it is necessary to identify parental risk factors immediately after preterm birth to assist in reducing parents' negative emotions and stress [8], and parents need to be prepared for the transition of childcare from the hospital to home.The process of preparing parents for preterm infant care should be gradual and dynamic, starting from admission.Through this preparation, parents acquire the knowledge and skills needed to care for preterm infants at home, as well as emotional stability and a sense of security.The preparation process for transitioning from the NICU to home should be tailored to the individual social circumstances of the parents, and the specific conditions and needs of each preterm infant.The child's clinical progression and the parents' adaptation to the situation should also be considered [14].Parents of preterm infants in Korea have supportive care needs, including a vague fear of caring for a baby upon imminent NICU discharge, real-world difficulties encountered while caring for preterm children, concerns about growth and development problems, and anxiety about possible complications [15].
However, there has been little research in South Korea on which health problems of premature infants cause the greatest parental concern and on whether these concerns change over time. The purpose of this study was to identify health-related concerns among the parents of preterm infants, changes in these concerns over time, and the factors that affected them during the perinatal period and after discharge.
In addition, based on the results of this study, we provide basic data for developing supportive care interventions tailored to parents' concerns, which change from the birth of a premature baby until after discharge from the hospital.
METHODS
Ethical statements: This study was approved by the Institutional Review Board (IRB) of Kosin University Gospel Hospital (No.
Study Design
This was a retrospective study [16] in which data were obtained from medical records.
Study Setting and Participants
The researchers intended to record the medical aspects of parents' concerns about their infants in order to support parenting interventions during the long-term follow-up of preterm infants, beginning when they return to the hospital for their first outpatient visit after discharge from the NICU. As in a previous study [18], the researchers interviewed parents of premature infants in the outpatient department (OPD) about their concerns regarding neonatal problems related to preterm birth and asked them to select one answer (Supplement 1. Questionnaire). Data obtained longitudinally were analyzed cross-sectionally.
In this study, 177 premature infants who visited the neonatal clinic of the Department of Pediatrics between December 1, 2018, and October 31, 2021, were investigated. These infants, born at < 37 weeks of gestation, were delivered at the hospital, hospitalized, and discharged from the NICU. The hospital is a level III unit with 18 NICU beds. Information about parents' health concerns regarding their premature infants was recorded in the OPD chart. Parents who were not asked for information about their concerns regarding the health of their preterm infants (n = 39), preterm infants who were transferred from other hospitals (n = 17), and infants who died (n = 2) were excluded from this study. No infants with congenital anomalies were enrolled. Finally, 119 premature infants were included in this study, and information was gathered from a total of 176 parents, as 57 of the 119 premature infants had both the father and the mother responding.
Variables
1) Clinical characteristics of premature infants
The clinical characteristics of premature infants included gestational age, birth weight, sex, delivery type, 1- and 5-minute Apgar scores, whether the infant was the first baby, total length of hospital stay, need for resuscitation and blood transfusion, disease, and central catheter insertion.
2) Maternal characteristics
The maternal characteristics investigated included maternal age and pregnancy complications (e.g., pregnancy-induced hypertension, gestational diabetes mellitus, hyperthyroidism, and hypothyroidism).
3) Parents' greatest concerns regarding their premature infants' health
The questionnaire in this study consisted of one item identifying whether the respondent was the father or the mother and five items on the greatest concerns about the infant's health at five different times: before birth, immediately after birth, 7 days after birth, before hospital discharge, and 1 week after hospital discharge to home (Supplement 1. Questionnaire).
Data Collection
This study was conducted after receiving approval for the research plan and a waiver of informed consent from the institutional review board (IRB) of the university hospital. The electronic medical records (EMRs) of the patients were collected as anonymous data with the cooperation of the medical information management office from December 1, 2018, to October 31, 2021. Neonatal problems associated with premature birth are classified in pediatric textbooks by organ system as follows: respiratory, cardiovascular, hematologic, gastrointestinal (GI), metabolic endocrine, central nervous system (CNS), renal, and infection [6]. Common problems in premature infants arise in these systems and are explained to the parents on the day of neonatal admission to the NICU.
Both during hospitalization and upon discharge from the NICU, parents were provided with documents summarizing the medical conditions of their infants and any consequences that occurred during hospitalization. During outpatient visits, diseases of prematurity that had been explained during hospitalization and upon discharge were fully explained again, and the parents' concerns about their infants' health were confirmed in the outpatient clinic during follow-up and management of the premature infants. Based on a previous study in which Korean mothers of premature infants were interviewed regarding emotional adjustment and concerns at five distinct points in time from immediately after NICU hospitalization to infant discharge [9], parents were also asked about their greatest health concerns regarding their infants at different times, including before birth, immediately after birth, 7 days after birth, before hospital discharge, and 1 week after hospital discharge to home [18], and their answers were recorded on the outpatient medical record forms. We investigated these data in this study because parental responses and stress are influenced by obstetric and infant characteristics [19].
Data Analysis
The investigated items were entered into Microsoft Excel (Microsoft Corp.) for management and IBM SPSS Statistics version 25.0 (IBM Corp.) for the comparison of continuous variables with the t test and of categorical variables with the chi-squared test [20]. All test results with p < .05 were considered statistically significant.
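To make the analysis procedure concrete, the sketch below shows how such comparisons could be run with SciPy. It is an illustration only: all variable names and values are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of the comparisons described above, using hypothetical data.
from scipy import stats

# Continuous variable (e.g., gestational age in weeks) compared between two
# parental-concern groups with an independent-samples t test.
ga_group_a = [33.9, 34.5, 32.8, 35.0, 33.2]
ga_group_b = [32.1, 31.5, 33.0, 30.9, 32.4]
t_stat, p_t = stats.ttest_ind(ga_group_a, ga_group_b)

# Categorical variable (e.g., cesarean delivery yes/no) compared with a
# chi-squared test on a 2x2 contingency table of counts.
table = [[40, 20],   # concern group: cesarean / vaginal
         [25, 34]]   # other concerns: cesarean / vaginal
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t test: t={t_stat:.2f}, p={p_t:.3f}")
print(f"chi-squared: chi2={chi2:.2f}, p={p_chi:.3f}")
# Results with p < .05 would be flagged as statistically significant.
```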
RESULTS

Parental Concerns about Prematurity-related Diseases
The analysis of parents' concerns about their premature infants revealed that most parents were concerned about the respiratory system throughout the entire period from before birth to 1 week after NICU discharge (Figure 1).
Throughout the entire period, it can be seen in Figure 1 that parents showed the most concern about the respiratory system after birth, with responses from 77 mothers (77/123, 62.6%) and 46 fathers (46/123, 37.4%). In addition, the CNS was the second most concerning system from before birth until before discharge, and statistically significant differences were observed before and after birth. Furthermore, during the first week after hospital discharge, the second most concerning system shifted from the CNS to the GI system, and the degree of concern regarding the GI system during the first week after hospital discharge significantly increased when compared to each of the other points in time (before birth, after birth, 7 days after birth, and before discharge) (Figure 1).
Differences in Infant and Maternal Characteristics Between the Greatest and Second-greatest Concerns of Parents
Table 2 presents the data and results for the greatest and second-greatest concerns of parents about their premature infants according to time. The premature infants of parents with respiratory system concerns after birth had younger gestational ages; moreover, a substantial percentage of these infants had a birth weight of < 1,500 g (Table 2). Respiratory system concerns significantly increased after birth compared to those before birth (98 vs. 123 respondents, respectively; p < .001) (Figure 1). The only factor associated with respiratory system concerns before birth was cesarean delivery; however, cesarean delivery was a statistically significant factor associated with respiratory system concerns during the entire study period (Table 2). The average 1- and 5-minute Apgar scores of the infants of parents with respiratory system concerns were also considerably lower than those of infants whose parents had different concerns (Table 2). The parents of infants with a diagnosis of respiratory distress syndrome (RDS) and the parents of infants with a diagnosis of bronchopulmonary dysplasia (BPD) before discharge had significant concerns about the respiratory system. Moreover, parental concerns about the respiratory system were significantly affected by transfusion requirements after birth, longer phototherapy within 7 days after birth, longer antibiotic treatment during the hospital stay before discharge, and longer hospital duration (Table 2).

Concerns about the CNS were the second-most common type from before birth to before hospital discharge. Concerns about the CNS decreased after birth compared to those reported before birth (25 vs. 15 respondents, respectively; p < .001) (Figure 1). However, concerns about the CNS subsequently increased again to a level similar to that seen immediately before birth and before hospital discharge (Figure 1). Most of the parents of infants with a diagnosis of intraventricular hemorrhage (IVH) or periventricular leukomalacia (PVL) before discharge also had significant CNS concerns (Table 2). Parents of infants with a gestational age of less than 32 weeks had the second-greatest concerns about the CNS before birth and at 7 days after birth.
Table 3 compares concerns about the GI system, which were the second-most common at 1 week after discharge from the NICU, with concerns about other organ systems. GI concerns significantly increased after discharge compared to those before birth, after birth, 7 days after birth, and before discharge (37 vs. 10, 13, 22, and 21 respondents; p < .05) (Figure 1). In the analysis of factors that increased concerns about GI problems during the post-hospital period, the gestational ages of the infants of parents with GI concerns (33.90 ± 2.48 weeks) were significantly higher than those of the infants of parents who had other concerns (32.78 ± 2.98 weeks; p = .039) (Table 3). Other significant factors included birth weights of ≥ 1,500 g (p = .021) and the absence of diagnoses of RDS (p = .030), retinopathy of prematurity (ROP) (p = .014), or IVH (p = .007) (Table 3). The duration of antibiotic treatment for infants whose parents had concerns about their GI systems was shorter than that of infants whose parents had concerns about other organ systems.
DISCUSSION
The parents of premature infants in this study were most concerned about the respiratory system at all times from before birth to 1 week after NICU discharge to home, which was similar to the results of a previous study [18]. This concern was probably the greatest because proper breathing is the most crucial aspect of life after birth [21]. Moreover, no differences in responses existed between mothers and fathers in this study. Unlike the differences found between mothers and fathers in terms of the emotional stress experienced by the parents of premature infants [7,15], it was evident that mothers and fathers shared the same medical health concerns during the current study. The existence of this similarity is plausible because it is generally well known that proper breathing is the most important aspect of life after the birth of a premature infant [22]. Although cesarean delivery was one of the risk factors for respiratory system concerns throughout the study period, it was the only statistically significant risk factor for concerns regarding the respiratory system during the prepartum period. It is commonly accepted that, compared with vaginal delivery, cesarean delivery is associated with greater risks for respiratory diseases in newborns [23]. This may also be because cesarean delivery is the only factor among those considered that the general public can easily explore and understand through websites, books, or television programs that provide information about pregnancy and childbirth. Respiratory system concerns were also more common after birth among parents whose premature infants had birth weights < 1,500 g or younger gestational ages.
There was a significant increase in concerns compared to those present before birth because newborns with younger gestational ages at birth or lower birth weights are more likely to experience respiratory problems. In addition, there were clearly other common characteristics of preterm infants whose parents were concerned about respiratory problems, including lower 1- and 5-minute Apgar scores, a diagnosis of RDS, a diagnosis of BPD, and the need for blood transfusions. This result suggested that parents of premature infants with low gestational age or low birth weight were concerned about the respiratory system because their infants had experienced RDS immediately after birth, or had sometimes been diagnosed with BPD.
In this study, the second-most common concern from before birth to before hospital discharge was CNS disorders. In previous studies [18], the second-most common parental concerns were focused on the CNS, cardiovascular system, GI system, and infection, and these varied according to time (with no significant differences among times). In the present study, the mean gestational age and mean birth weight were lower (33.0 ± 2.9 weeks and 1,991.5 ± 570.8 g) than the values reported by previous investigators (33.1 ± 3.0 weeks and 2,124.4 ± 685.1 g) [13]. Overall, the premature infants of parents who had CNS concerns had the lowest gestational ages; however, this trend was not statistically significant, and it was not affected by birth weight. Only the parents of premature infants diagnosed with brain abnormalities, such as IVH or PVL, had statistically significant increases in CNS concerns. This result could be due to bias caused by intervention by the medical staff, because parental interviews were conducted due to the presence of brain abnormalities. However, this could also have occurred because the mothers were focused on the current conditions and future development of their infants [9,24], and because concerns about CNS disorders are naturally common among parents of premature infants. In addition, mothers who are aware of serious problems in their premature infants continue to experience guilt and worry about the health of their infants [9]. Concerns about CNS development are inevitable because of the long-term consequences.
When the gestational age was older and the infants had no disease specifically attributable to prematurity, concerns about the GI system were the second-most common health concerns among parents at 1 week after NICU discharge.
This trend might have existed because changes in the health concerns of parents were similar to the changes in their emotions after hospital discharge [9,25,26]. When infants are in objectively good health, parents feed them directly and witness their development at home after discharge, unlike in the hospital. It was also evident that GI system concerns among parents increased as infants spent more of their time consuming formula or breast milk and sleeping; these are activities that parents directly engage in to support their fragile infants so that they can grow in good health at home. Therefore, parents had high levels of concern about the GI system related to feeding, in which the rate of parental involvement is high, even when their premature babies did not have any specific GI disease and had an older gestational age. Most readmissions of late preterm infants within the first 7 to 10 days after discharge have been attributed to feeding-related problems [27,28], which could result in increased GI concerns during the post-discharge period. Under these circumstances, parents are mainly interested in factors related to infant nutritional status and follow-up care, with the expectation that their premature infants will develop normally after discharge [9].
Therefore, knowing that parents have concerns about the GI system after discharge, health care providers will be able to proactively provide psychological support, education, and intervention by offering GI-specific information and support during outpatient care of preterm infants. In addition, given the retrospective evaluation in this study, it would be valuable to conduct a prospective clinical study to confirm the results of an intervention in which educational support regarding the development of the digestive system in premature infants and the prevention of digestive issues is provided to parents during neonatal transitional care in discharge education. After the most common type of concern (regarding the respiratory system) was excluded, the next-most common concern varied for parents of premature infants at the time of discharge from the NICU. This result was significantly dependent on the gestational age at birth and the specific diseases diagnosed. This could be the result of the emotional changes that occur in mothers as their focus shifts from survival to caring for their physically frail infants at home after discharge [9]. Knowledge that the focus of the parents of premature infants changes over time, especially after NICU discharge, may be helpful when supporting parents with concerns after discharge and may improve outpatient follow-up care of premature infants. The long-term relief of parents' emotional and medical concerns might be achieved through additional research analyzing changes in parents' concerns and identifying factors that affect these changes at milestones during the development of their premature infants.
In conclusion, proactive strategies are necessary to promote the growth and development of preterm infants and to meet parental healthcare needs for premature infants after NICU discharge. The level of preparation for the transition to home of a premature baby can impact parent and child survival and fitness.
Figure 1. Parents' health concerns about the health problems of their premature infants according to time (*p < .05).
Healthcare professionals must provide education on respiratory and GI care during the transitional period of hospitalization before discharge, because the growth and development trajectory of premature babies can be markedly different from that of term infants, and tailored support may be needed [29]. It is necessary to offer tailored education to parents based on the specific characteristics of their preterm infant, considering factors including gestational age, birth weight, and CNS conditions. However, research on the supportive care needs and interventions for parents of preterm infants is insufficient in South Korea [15]. Therefore, we suggest the longitudinal assessment of specific needs based on the concerns of parents of premature infants from before birth to post-discharge, and the development and implementation of supportive interventions to meet those needs during each phase.

This study had several limitations. First, because this study only included participants from a single NICU and of a single ethnic group, and because the distributions of gestational age and birth weight were not even among study participants, the study findings are not representative of the entire population. Second, memory bias might have influenced the results because parents of premature infants were asked to recall information about their health concerns. Finally, the number of participants was small; therefore, further large-scale studies are necessary.

CONCLUSION

Among parents of premature infants, the most common focus of medical concerns in the period from birth to 1 week after NICU discharge was the respiratory system. The second-most common focus of medical concerns after NICU discharge changed based on the gestational age and diseases of the premature infants. This information may help to increase understanding of the health concerns of parents about their infants from before birth to after NICU discharge and how these concerns may change over time. Therefore, it is necessary to develop and implement supportive interventions to reduce parental anxiety when caring for premature infants. In addition, it is essential to offer multidisciplinary interventions involving doctors, nurses, development experts, and others, to support the individual growth and development of preterm infants and to assist in managing complications.
Table 3. Concerns of Parents about the GI System Compared to Those about Other Organ Systems at 1 Week after Hospital Discharge (N=176) | 2024-05-04T15:41:00.002Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "1f4db14b1182bd7131ba4dc199689de7e232ef12",
"oa_license": "CCBYNC",
"oa_url": "https://www.e-chnr.org/upload/pdf/chnr-2024-007.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d7212d9209709853344778a1b5bb03f9bcf3e28b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219302943 | pes2o/s2orc | v3-fos-license | A Fully Expanded Dependency Treebank for Telugu
Treebanks are an essential resource for syntactic parsing. The available Paninian dependency treebanks for Telugu are annotated only with inter-chunk dependency relations, and not all words of a sentence are part of the parse tree. In this paper, we automatically annotate the intra-chunk dependencies in the treebank using a Shift-Reduce parser based on Context Free Grammar rules for Telugu chunks. We also propose a few additional intra-chunk dependency relations for Telugu apart from the ones used in the Hindi treebank. Annotating intra-chunk dependencies finally provides a complete parse tree for every sentence in the treebank. Having a fully expanded treebank is crucial for developing end-to-end parsers which produce complete trees. We present a fully expanded dependency treebank for Telugu consisting of 3220 sentences. In this paper, we also convert the treebank annotated with the Anncorra part-of-speech tagset to the latest BIS tagset. The BIS tagset is a hierarchical tagset adopted as a unified part-of-speech standard across all Indian Languages. The final treebank is made publicly available.
Introduction
Treebanks play a crucial role in developing parsers as well as investigating other linguistic phenomena, which is why there has been a targeted effort to create treebanks in several languages. Some such notable efforts include the Penn treebank (Marcus et al., 1993) and the Prague Dependency treebank (Hajičová, 1998). A treebank is annotated with a grammar. The grammars used for annotating treebanks can be broadly categorized into two types: Context Free Grammars and dependency grammars. A Context Free Grammar consists of a set of rules that determine how the words and symbols of a language can be grouped together and a lexicon consisting of words and symbols. Dependency grammars, on the other hand, model the syntactic relationship between the words of a sentence directly using head-dependent relations. Dependency grammars are useful in modeling free word order languages. Indian languages are primarily free word order languages. There are a few different dependency formalisms that have been developed for different languages. In recent years, Universal Dependencies (Nivre et al., 2016) have been developed to arrive at a common dependency formalism for all languages. Paninian dependency grammar (Bharati et al., 1995) is specifically developed for Indian languages, which are morphologically rich, free word order languages. Case markers and postpositions play crucial roles in these languages, and word order is considered only at a surface level when required. Most Indian languages are also low resource languages. The ICON-2009 and 2010 tools contests made available the initial dependency treebanks for Hindi, Telugu and Bangla. These treebanks are small in size and are annotated using the Paninian dependency grammar. Further efforts are being taken to build dependency annotated treebanks for Indian languages, such as the Hindi and Urdu multi-layered and multi-representational treebanks (Bhatt et al., 2009), which are annotated in SSF format (Bharati et al., 2007). Each sentence is annotated at the word level with part of speech tags, at the morphological level with root, gender, number, person, TAM, vibhakti and case features, and the dependency relations are annotated at a chunk level. The dependency relations within a chunk are left unannotated. Intra-chunk dependency annotation has been done on Hindi (Kosaraju et al., 2012) and Urdu (Bhat, 2017) treebanks previously. Annotating intra-chunk dependencies leads to a complete parse tree for every sentence in the treebank. Having completely annotated parse trees is essential for building robust end-to-end dependency parsers or making the treebanks available in CoNLL (Buchholz and Marsi, 2006) format and thereby making use of readily available parsers. In this paper, we extend one of those approaches to the Telugu treebank to annotate intra-chunk dependency relations. Telugu is a highly inflected, morphologically rich language and has a few constructions, like classifiers, that do not occur in Hindi, which makes the expansion task challenging. The fully expanded Telugu treebank is made publicly available. The part-of-speech and chunk annotation of the Telugu treebank is done following the Anncorra (Bharati et al., 2009b) tagset developed for Indian languages. In recent years, there has been a coordinated effort to develop a Unified Parts-of-Speech (POS) Standard that can be adopted across all Indian Languages. This tagset is commonly referred to as the BIS (Bureau of Indian Standards) tagset. All the latest part of speech annotation of Indian languages is done using the BIS tagset.
In this paper, we convert the existing Telugu treebank from the Anncorra to the BIS standard. The BIS tagset is a fine-grained hierarchical tagset, and many Anncorra tags diverge into finer-grained BIS categories. This makes the conversion task challenging. The rest of the paper is organised as follows. In section 2, we describe the Telugu Dependency Treebank, section 3 describes the part of speech conversion from the Anncorra to the BIS standard, section 4 describes the intra-chunk dependency relation annotation for Telugu, and we conclude the paper in section 5.
Telugu Treebank
An initial Telugu treebank consisting of around 1600 sentences was made available in the ICON 2009 tools contest. This treebank is combined with the HCU Telugu treebank containing approximately 2000 similarly annotated sentences and another 200 sentences annotated at IIIT Hyderabad. We clean up the treebank by removing sentences with wrong formats or incomplete parse trees. The final treebank consists of 3220 sentences. Details about the treebank are listed in Table 1.
Table 1: Telugu treebank stats
No. of sentences: 3222
Avg. sentence length: 5.5 words
Avg. no. of chunks per sentence: 4.2
Avg. chunk length: 1.3 words

The treebank is annotated using the Paninian dependency grammar (Bharati et al., 1995). The Paninian dependency relations are created around the notion of karakas, the various participants in an action. These dependency relations are syntacto-semantic in nature. There are 40 different dependency labels specified in the Paninian dependency grammar. These relations are hierarchical, and certain relations can be under-specified in cases where a finer analysis is not required or when the decision making is more difficult for the annotators (Bharati et al., 2009b). Begum et al. (2008) describe the guidelines for annotating dependency relations for Indian languages using Paninian dependencies. The treebank is annotated with part-of-speech tags and morphological information like root, gender, number, person, TAM, vibhakti or case markers etc. at the word level. The dependency relations are annotated at the chunk level. The treebank is made available in SSF format (Bharati et al., 2007). An example is shown in Figure 1. The dependency tree for the sentence is shown in Figure 2.
In the example sentence, the intra-chunk dependencies, i.e., dependency labels for cAlA (many) and I (this), are not annotated. Only the chunk heads, xeSAllo (countries-in) and parisWiwi (situation), are annotated as the children of lexu (is-not-there).
The dependency treebanks are manually annotated and it is a time consuming process. In AnnCorra formalism for Indian languages, a chunk is defined as a minimal, non recursive phrase consisting of correlated, inseparable words or entities (Bharati et al., 2009a). Since the dependencies within a chunk can be easily and accurately identified based on a few rules specific to a language, these dependencies have not been annotated in the initial phase. But inter-chunk annotation alone does not provide a fully constructed parse tree for the sentence. Hence it is important to determine and annotate intra-chunk relations accurately.
In this paper, we expand the Telugu treebank by annotating the intra-chunk dependency relations.
Part-of-Speech Conversion
The newly annotated 200 sentences in the treebank are annotated with the BIS tagset while the rest are annotated using Anncorra tagset. We convert the sentences with Anncorra POS tags to BIS tags so that the treebank is uniformly annotated and adheres to the latest standards.
Anncorra tagset

Bharati et al. (2009a) propose the POS standard for annotating Indian Languages. This standard has been developed as part of the guidelines for annotating corpora in Indian Languages for the Indian Language Machine Translation (ILMT) project and is commonly referred to as the Anncorra POS tagset. The tagset consists of a total of 26 tags.
BIS tagset
The BIS (Bureau of Indian Standards) tagset is a unified POS standard for Indian Languages developed to standardize the POS tagging of all the Indian Languages. This tagset is hierarchical and at the topmost level consists of 11 POS categories. Most of these categories are further divided into several fine-grained POS tags. The annotators can choose the level of coarseness required. They can use the highest level tags for a coarse-grained tagset or go deeper down the hierarchy for more fine-grained tags. The fine-grained tags automatically contain the information of the parent tags. For example, the tag V VM VF specifies that the word is a verb (V), a main verb (V VM) and a finite main verb (V VM VF).
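The following minimal sketch illustrates how a fine-grained BIS tag encodes its coarser ancestors. The space-separated tag convention follows the way tags are printed in this paper, and the helper function is a hypothetical illustration, not part of the project's tooling.

```python
def bis_ancestors(tag: str):
    """Return the tag itself plus all coarser parent tags, coarsest first."""
    parts = tag.split()
    return [" ".join(parts[:i]) for i in range(1, len(parts) + 1)]

print(bis_ancestors("V VM VF"))
# ['V', 'V VM', 'V VM VF'] -> verb, main verb, finite main verb
```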
Converting Anncorra to BIS
For most tags present in the Anncorra tagset, there is a direct one-to-one mapping to a BIS tag. However, there are a few tags in Anncorra which diverge into many fine-grained BIS categories. Those tags are shown in Table 2.
It should be noted that a one-to-many mapping exists only with fine-grained tags. There is still a one-to-one mapping between the Anncorra tag and the corresponding parent BIS tag in all cases except question words. During conversion, we aim to annotate with the most fine-grained BIS tag. When the fine-grained tag cannot be determined, we fall back to the parent tag. We use a tagset converter that maps the various tags in the Anncorra schema to tags in the BIS schema. In the case of tags having multiple possibilities, a list-based approach is used. Most Anncorra tags diverging into fine-grained BIS tags are for function words, which are limited in number. Separate lists consisting of words belonging to fine-grained BIS categories are created. A word is annotated with a fine-grained BIS tag if it is present in the corresponding tag word list; otherwise it is annotated with the parent tag.
Pronouns One of the main distinctions between the two tagsets is in the annotation of pronouns. In Anncorra, all pronouns are annotated with a single tag, PRP. The BIS schema contains separate tags for annotating personal (PR PRP), reflexive (PR PRF), relative (PR PRL), and reciprocal (PR PRC) pronouns as well as question words (PR PRQ). Pronouns in a language are generally limited in number. In Telugu, however, pronouns can be inflected with case markers and there can be a huge number of them. When a pronoun is not found in any word list, it is annotated with the parent tag PR.
Demonstratives In Anncorra, there is a single tag for annotating demonstratives where as BIS tagset distinguishes between diectic, relative and question-word demonstratives. Demonstratives are limited in number and the same list based approach used for pronouns is applied here.
Symbols Symbols are separated into symbols and punctuations.
Question words They are separated into pronoun question words and demonstrative question words in BIS tagset. Demonstrative question words are always followed by a noun. While resolving question words (WQ), if the word is followed by a noun it is marked as DM DMQ, else it is marked as PR PRQ.
Verbs Another distinction between the two tagsets lies in the annotation of verb finiteness. In Anncorra, it is annotated only at the chunk level. In the BIS schema, finiteness can be annotated at the word level. While resolving verbs (V VM), we look at the verb chunk. There is a one-to-one mapping between Anncorra chunk types and the fine-grained BIS verb categories.
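A hedged sketch of the list-based converter described above follows. The direct mapping and word lists are illustrative stand-ins rather than the project's actual resources; only the question-word rule follows the text directly.

```python
# Illustrative converter sketch; DIRECT_MAP entries and word lists are
# hypothetical examples, not the actual conversion tables.
DIRECT_MAP = {"NN": "N NN", "NST": "N NST", "JJ": "JJ", "RB": "RB"}
PRONOUN_LISTS = {
    "PR PRF": {"wanu"},          # reflexive pronouns (hypothetical entries)
    "PR PRC": {"okarinokaru"},   # reciprocal pronouns (hypothetical entry)
}

def convert(word, anncorra_tag, next_tag=None):
    if anncorra_tag == "WQ":
        # Demonstrative question words are always followed by a noun.
        return "DM DMQ" if next_tag and next_tag.startswith("N") else "PR PRQ"
    if anncorra_tag == "PRP":
        for bis_tag, words in PRONOUN_LISTS.items():
            if word in words:
                return bis_tag
        return "PR"  # fall back to the parent tag when no list matches
    return DIRECT_MAP.get(anncorra_tag, anncorra_tag)

print(convert("eVvaru", "WQ", next_tag="VM"))  # -> PR PRQ (not followed by a noun)
```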
Annotating Intra-chunk Dependencies
The intra-chunk annotation in SSF format for the sentence in Figure 1 is shown in Figure 4, and the fully expanded dependency tree is shown in Figure 3. It can be seen that, in this case, unlike in Figure 2, cAlA (many) is attached to its chunk head, xeSAllo (countries-in), and I (this) is attached to its chunk head, parisWiwi (situation). The parse tree for the sentence is now complete. Complete parse trees are useful for creating end-to-end parsers which do not require intermediate pipeline tools like POS taggers, morphological analyzers and shallow parsers. This is a huge advantage, especially for low resource languages like Telugu. Kosaraju et al. (2012) first proposed the guidelines for annotating intra-chunk dependency relations in SSF format for Hindi. They propose a total of 12 intra-chunk dependency labels, listed in Table 3; lwg refers to local word group and pof refers to part of. They also propose two approaches, one rule based and another statistical, for automatically annotating intra-chunk dependencies in Hindi. In the rule based approach, several rules are created conditioned on the POS, chunk name or type, and the position of the chunk head with respect to the child node. The intra-chunk dependencies are marked based on these rules. In the statistical approach, MaltParser (Nivre et al., 2006) is used to identify the intra-chunk dependencies. A model is trained on a few manually annotated chunks with MaltParser, and the same model is used to predict the intra-chunk dependencies for the rest of the treebank.
Table 3: Intra-chunk dependencies proposed for Hindi
nmod adj: adjectives modifying nouns or pronouns
lwg psp: post-positions
lwg neg: negation
lwg vaux: verb auxiliaries
lwg rp: particles
lwg uh: interjection
lwg cont: continuation
pof redup: reduplication
pof cn: compound nouns
pof cv: compound verbs
jjmod intf: adjectival intensifier
rsym: symbols

Bhat (2017) proposes a different approach for annotating intra-chunk dependencies for Hindi and Urdu by combining both rule based and statistical approaches. Instead of a completely rule based system, they create a Context Free Grammar (CFG) for identifying intra-chunk dependencies. The dependencies within a chunk are annotated based on the CFG using a shift reduce parser.
Intra-chunk dependency annotation for Telugu treebank
In addition to the twelve dependency labels proposed for Hindi, we also introduce a few more labels, nmod, nmod wq, adv and intf for annotating intra-chunk dependencies for Telugu treebank. nmod and adv are already present in the inter-chunk dependency labels (Bharati et al., 2009b).
nmod This dependency relation is used when demonstratives, proper nouns, pronouns and quantifiers modify a noun or pronoun.
intf Intensifiers (RP INTF) can modify both adjectives and adverbs. So we replace the jjmod intf with intf and use the same dependency label when an intensifier modifies an adverb or adjective.
nmod wq This dependency relation is used when question words modify nouns inside a chunk.
adv This dependency relation is used when adverbs modify a verb inside a chunk.
pof cv In Telugu, compound verbs are written together as single words, so this dependency relation is not seen in Telugu. An example of a compound verb is kOsEswAnu, a compound of kOsi and vEswAnu. In cases like ceyyAlsi vaccindi, vaccindi is annotated as an auxiliary verb.
lwg rp This dependency label is used to annotate particles like gAru, kUdA etc. It is also used for classifiers. Telugu contains classifiers and a commonly used classifier is maMxi. It specifies that the noun following maMxi is human. Sometimes the following noun can be dropped and in those cases maMxi is treated as a noun. Classifiers are categorized under particles. So, maMxi is marked as a child of koVMwa using label lwg rp in the above example.
lwg psp In Telugu most post-positions occur as inflections of content words. But few of them also occur separately. The ones occurring separately are marked as lwg psp. Sometimes, spatio-temporal nouns (N NST) also act as post-positions when occurring alongside nouns. In these cases, they are annotated as lwg psp.
In this paper, we follow the approach proposed by Bhat (2017), which makes use of a Context Free Grammar (CFG) and a shift-reduce parser for automatically annotating intra-chunk dependencies. We use the treebank expander code made available by Bhat (2017) and write the Context Free Grammar for Telugu. The Context Free Grammar is generated using the POS tags and creates a mapping between head and child POS tags and dependency labels. The intra-chunk annotation is done using a shift-reduce parser which internally uses the Arc-Standard (Nivre, 2004) transition system. The parser predicts a sequence of transitions starting from an initial configuration to a terminal configuration, annotating the chunk dependencies in the process. A configuration consists of a stack, a buffer, and a set of dependency arcs. In the initial configuration, the stack is empty, the buffer contains all the words in the chunk, and the set of intra-chunk dependencies is empty. In the terminal configuration, the buffer is empty and the stack contains only one element, the chunk head, and the chunk sub-tree is given by the set of dependency arcs. The next transition is predicted based on the Context Free Grammar and the current configuration.
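The following is a minimal sketch of such an Arc-Standard loop over a single chunk, assuming a head-final chunk (the head is the rightmost word) so that LEFT-ARC transitions suffice. The toy grammar lookup stands in for the actual Telugu CFG, which is not reproduced here.

```python
# Toy (head POS, child POS) -> label grammar; entries are illustrative only.
GRAMMAR = {("N NN", "DM DMD"): "nmod", ("N NN", "QT QTC"): "nmod"}

def expand_chunk(words, tags):
    """Annotate intra-chunk arcs for one chunk via Arc-Standard transitions."""
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        if len(stack) >= 2:
            head, child = stack[-1], stack[-2]
            label = GRAMMAR.get((tags[head], tags[child]))
            if label:                    # LEFT-ARC: attach child to head
                arcs.append((words[head], words[child], label))
                stack.pop(-2)
                continue
        if not buffer:
            break                        # no rule applies; stop to avoid looping
        stack.append(buffer.pop(0))      # SHIFT
    # Terminal configuration: buffer empty, stack holds only the chunk head.
    return arcs

print(expand_chunk(["I", "parisWiwi"], ["DM DMD", "N NN"]))
# [('parisWiwi', 'I', 'nmod')]
```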
Results
We evaluate intra-chunk dependency relations annotated by the parser for 106 sentences. The test set evaluation results are shown in Table 4.
Almost all of the wrongly annotated chunks are due to POS errors or chunk boundary errors. Since the Context Free Grammar rules are written using POS tags, errors in the annotation of POS tags automatically lead to errors in intra-chunk dependency annotation. The dependency relations are annotated within the chunk boundaries, so any errors in chunk boundary identification also lead to errors in intra-chunk dependency annotation. Telugu is an agglutinative language and the chunk size rarely exceeds three words. The CFG grammar based approach works accurately provided there are no errors in POS or chunk annotation.
Conclusion
In this paper, we automatically annotate the Telugu dependency treebank with intra-chunk dependency relations thus finally providing complete parse trees for every sentence in the treebank. We also convert the Telugu treebank from AnnCorra part-of-speech tagset to the latest BIS tagset. We make the fully expanded Telugu treebank publicly available to facilitate further research. | 2020-06-04T22:53:43.999Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "8b2fb07e9fecb724d011ff6a7a5400771623ea47",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "8b2fb07e9fecb724d011ff6a7a5400771623ea47",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
37939138 | pes2o/s2orc | v3-fos-license | SITAGLIPTIN IMPAIRS HEALING OF EXPERIMENTALLY INDUCED GASTRIC ULCERS VIA INHIBITION OF INOS AND COX-2 EXPRESSION
Gastric ulcer healing is a complex process that is regulated by several promoting factors including COX-2 and iNOS. Diabetes mellitus is usually associated with delayed gastric ulcer healing. Hence, the current study was designed to investigate the effect of sitagliptin (dipeptidyl peptidase-4 inhibitor) on gastric ulcer healing and expression of iNOS and COX-2 in rat stomach. The study was conducted on 30 rats divided into three equal groups. Group 1 served as normal control group. Gastric ulcer was induced, by serosal application of acetic acid, in group 2 (ulcer model group) and group 3 (sitagliptin-treated group). Sitagliptin was administered from day 3 to day 10 in group 3. All rats were sacrificed on day 10 and stomachs were removed for pathological examination and immunohistochemical assessment of COX-2 and iNOS expression. Pathological examination revealed that gastric ulcer healing was significantly impaired in the sitagliptin-treated group, as evidenced by the significantly larger ulcerated area and impaired ulcer base maturation. COX-2 and iNOS expression as well as mean MVD were significantly diminished in the sitagliptin-treated group as compared to the ulcer model group. A significant positive correlation was found between COX-2 and iNOS, implying their synergistic action. We conclude that sitagliptin significantly impairs gastric ulcer healing in rats, possibly through inhibition of iNOS and COX-2 expression. Our results raise the question of whether sitagliptin is advisable in diabetic patients with a pre-existing gastric ulcer. Our preliminary experimental findings need to be substantiated by future human studies. DOI: 10.1061/(ASCE)SU.1943-5428.0000416. This work is made available under the terms of the Creative Commons Attribution 4.0 International license, https://creativecommons.org/licenses/by/4.0/.
INTRODUCTION
Gastric ulcer is considered one of the most prevalent gastrointestinal disorders. Its clinical outcome is determined by its ability to heal in order to prevent further damage to the gastric mucosa (Dharmani et al., 2003; Xie et al., 2013).
Gastric ulcer healing is a dynamic process encompassing epithelial regeneration, angiogenesis and maturation of the base (reduction of the ulcer base size) and is regulated by multiple factors (Sato et al., 2013). COX-2 (cyclooxygenase-2) and iNOS (inducible nitric oxide synthase) are among the most important healing-promoting factors for gastric ulcer (Shigeta et al., 1998; Tatemichi et al., 2003; Chatterjee et al., 2013). COX-2 induces the synthesis of prostaglandins (PGs) that have stimulatory effects on ulcer healing (Takahashi et al., 1998a). iNOS-derived nitric oxide (NO) contributes to gastric ulcer healing through maintenance of an increased blood flow at the ulcer margin and stimulation of angiogenesis in the ulcer base, as well as inhibition of inflammatory neutrophil accumulation via downregulation of the surface expression of adhesion molecules (Salzman et al., 1998; Konturek et al., 1993). Recently, it was shown that the iNOS-based inflammatory pathway cross-links with the more well-known COX-2 pathway. This synergistic molecular interaction between the two inflammatory systems may cast more light on their healing-promoting effects on gastric ulcer (Kim et al., 2005).
Diabetic patients are more vulnerable to developing gastric ulcers, as diabetes leads to impairment of the antioxidant defense system of the gastric mucosa (Owu et al., 2012; Konturek et al., 2010). In addition, diabetic patients with gastric ulcers may suffer from reduced perception of the typical gastrointestinal symptoms due to diabetic neuropathy, and they are at increased risk of bleeding (Boehme et al., 2007). Furthermore, diabetes may be associated with delayed healing of gastric ulcer due to a significant decrease in the gastric microcirculation, possibly resulting from a reduction in mucosal prostaglandins (Brzozowska et al., 2004). Moreover, it was reported that hyperglycemia, together with the increased production of proinflammatory cytokines, results in a sustained inflammatory reaction and thus may be responsible for the delay of healing at the ulcer area (Cosentino et al., 2003). These reports necessitate studying the effect of antidiabetic drugs on gastric ulcer healing.
Dipeptidyl peptidase-4 (DPP-4) inhibitors are recently introduced drugs used for treatment of type 2 diabetes. Recent studies demonstrated that DPP-4 inhibitors or related compounds may possess marked inflammation-modifying effects through modulation of cytokine production (Alonso et al., 2012).
To the best of our knowledge, there have been no studies in the literature investigating the effect of DPP-4 inhibitors on gastric ulcer healing. Accordingly, the purpose of this research was to explore the effect of oral administration of sitagliptin (DPP-4 inhibitor) on the healing process of experimentally induced gastric ulcer in rats. In addition, the relation between sitagliptin and expression of healing-promoting factors (COX-2 and iNOS) was also investigated.
MATERIALS AND METHODS

Experimental Animals
All experiments were performed in accordance with national animal care guidelines and were preapproved by the Ethics Committee at Faculty of Medicine, Alexandria University.
The present study was conducted on 30 male Wistar albino rats weighing from 150 to 200 g. The rats were obtained from the Animal House at the Faculty of Medicine, Alexandria University. They were housed under optimal laboratory conditions (relative humidity 85±2%, temperature 22±1°C and 12 h light and 12 h dark cycle). All through the study, rats were fed on standard commercial pellet diet and had free access to drinking water.
Animal Grouping
Rats were divided into 3 groups of 10 rats each. Group 1 (normal control group): rats had free access to drinking water without any additive. Group 2 (gastric ulcer model): gastric ulcer was induced in rats, and they had free access to drinking water without any additive. Group 3 (sitagliptin-treated group): rats received sitagliptin added to the drinking water at a dose of 30 mg/kg orally every day, beginning on day 3 and continuing for 7 days following gastric ulcer induction. The dose of 30 mg/kg/d is considerably higher than the human dose because sitagliptin has a half-life of two hours in rats (Beconi et al., 2007) versus 13 h in humans (Dhillon, 2010). This short half-life necessitated continuous administration through drinking water instead of the once-a-day dosing used in humans (Chen et al., 2011). The Institutional Animal Care and Use Committee (IACUC) protocol of Boston University, USA, for adding a novel compound to the drinking water was followed in order to ensure that each rat received the exact dose in the drinking water (IACUC, 2013).
Induction of Gastric Ulcer
After fasting for 18 h, rats were anesthetized using halothane, and gastric ulcers were induced by application of 0.2 mL of acetic acid (100%) to the serosal surface for 60 sec as described by Okabe and Amagase (2005). This model of gastric ulcer was chosen as it highly resembles human ulcers in terms of both pathological features and healing process.
Ten days following gastric ulcer induction, rats were sacrificed by an overdose of intraperitoneally injected sodium pentobarbital. The stomachs were removed, opened along the greater curvature and rinsed with saline then they were fixed in 10% buffered formalin.
Pathological Assessment of Ulcer Healing
The stomachs were grossly examined for pathological changes. The ulcerated area (mm^2) was quantified using the following equation:

S = (π/4) × d1 × d2

where S represents the ulcerated area (mm^2), and d1 and d2 are the longest longitudinal and transverse diameters of the ulcer (Kang et al., 2010).
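As a small illustration, the area computation, assuming the elliptical π/4 approximation reconstructed above, can be expressed as:

```python
import math

def ulcer_area(d1_mm: float, d2_mm: float) -> float:
    """Approximate ulcerated area (mm^2) from the longest longitudinal
    and transverse diameters, using the elliptical approximation."""
    return math.pi / 4.0 * d1_mm * d2_mm

print(f"{ulcer_area(6.0, 4.0):.1f} mm^2")  # ~18.8 mm^2 for a 6 x 4 mm ulcer
```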
Representative sections were routinely processed. 5 µm-thick sections were cut and stained with the conventional Haematoxylin and Eosin (H&E) stain and examined by the light microscope for histopathological assessment. Masson trichrome stain was used to highlight fibrosis. The degree of inflammation, degeneration and thickness (maturation) of ulcer base were semi-quantitatively assessed at the ulcer bed. Length of regenerated mucosa (mm) was also measured.
Immunohistochemistry for iNOS and COX-2
The deparaffinized tissue sections were rehydrated in graded alcohols. Immunohistochemical staining was performed using an avidin-biotinylated immunoperoxidase methodology. The endogenous peroxidase activity was quenched by using hydrogen peroxide 3% for 10 min. For antigen retrieval, sections were microwaved in 10mM citrate buffer (pH 6.0). Prediluted primary antibodies, COX-2 (clone SP21, rabbit monoclonal antibody) and iNOS (rabbit polyclonal antibody) were used. The bound antibodies were detected by the UltraVision Detection System Anti-Polyvalent, HRP/DAB (Ready-To-Use). Positive and negative controls were included in all runs.
Primary antibodies and detection system were purchased from Lab Vision Corporation, Thermo Fisher Scientific Inc., USA.
Computerized Image Analysis (CIA)
Quantitative estimation of the total area of positive reaction was done on histological sections immunostained for iNOS and COX-2 using image analyzer software (Digimizer ® Version 4.1, MedCalc Software, Belgium).
Binary images for measurement were generated and the mean total area of positive reaction was calculated.
Assessment of Microvessel Density (MVD)
Sections were immunostained with the vascular marker CD31 (rabbit polyclonal antibody) as described above (Fig. 1d). MVD was then calculated as previously described (Dai et al., 2005).
Statistical Analysis
Data were analyzed using the Statistical Package for Social Science (SPSS® Statistics 20). The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov test. Normally distributed quantitative variables were described using the mean and standard deviation, and the independent t-test was used to compare their means. Non-normally distributed quantitative variables and ordinal qualitative variables were described using the median, minimum and maximum, and the Mann-Whitney (U) test was used to compare their distributions. Correlations were tested using Spearman's correlation coefficient. Statistical significance was judged at the 5% level (p ≤ 0.05).
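A minimal sketch of this test-selection logic, using SciPy and made-up example values, is shown below; the alpha level and data are illustrative only, not the study's measurements.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Choose independent t-test vs. Mann-Whitney U based on normality."""
    normal = all(
        stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b)
    )
    if normal:
        return "independent t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

test, p = compare_groups([12.1, 14.3, 13.8, 12.9], [18.2, 17.5, 19.1, 16.8])
print(test, f"p = {p:.3f}")
```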
RESULTS

MVD was significantly higher in the ulcer model group compared to the normal control group (p < 0.001). In addition, the length of regenerated mucosa (mm) was significantly positively correlated with MVD (ρ = 0.532, p = 0.003).
Induction of Gastric Ulcer Significantly Induced COX-2 and iNOS Expression
COX-2 and iNOS expression were induced in the stomachs of the ulcer model group (Fig. 2a and b), with statistically significantly higher expression compared to the normal control group, which lacked their expression (t = 6.90, p < 0.001 and t = 5.79, p < 0.001, respectively). COX-2 and iNOS were most intensely expressed in inflammatory cells at the ulcer base (Fig. 2a and b).
Sitagliptin Impaired Gastric Ulcer Healing and Significantly Inhibited COX-2 and iNOS Expression and Diminished MVD
Seven days of treatment with sitagliptin in group 3 resulted in pathologically proven significant impairment of gastric ulcer healing (Fig. 1c) as compared to the ulcer model group (Fig. 1b). The ulcerated area in the sitagliptin-treated group was significantly larger (nearly 9 times wider) than that in the model group (U = 2.50, p = 0.001) (Table 1 and Fig. 3a).
The expression of COX-2 and iNOS in the sitagliptin-treated group (Fig. 2c and d) was more pronounced at the ulcer margins with less intense expression in inflammatory cells at the ulcer base.
DISCUSSION
Gastric ulcer refers to a disruption of the mucosal integrity of the stomach with local excavation due to active inflammation (Valle, 2002). iNOS and COX-2 represent important lines of defense necessary for maintenance of mucosal integrity and are important factors in ulcer healing processes, especially angiogenesis, base maturation and modulation of inflammatory reactions (Takahashi et al., 2001; Stachura et al., 1995; Akiba et al., 1998; Dharmani et al., 2003).
In addition, iNOS-derived NO and COX-2-derived PGs have been reported to have an impact on a wide variety of cell types and processes that may be active during inflammatory responses, including leucocyte adhesion and microvascular responsiveness; hence, they play important roles in gastric ulcer healing (Mizuno et al., 1997; Hickey, 2001).
Moreover, Allen et al. (1988) proposed that NO plays an important role in ulcer healing by forming a gelatinous coat covering the ulcer bed, consisting of a fibrin-based gel with mucus and necrotic cells, which acts as a protective barrier preventing direct contact with the gastric luminal contents. Furthermore, Wallace (2008) highlighted the fact that the protective functions of PGs in the stomach can be carried out by other mediators, in particular NO.
COX-2 and iNOS are normally undetectable in most normal tissues, their expression being induced only at inflammatory sites (Mitchell et al., 1995; Okazaki et al., 2007). The results of the present study are in accordance with that finding, as normal control stomach tissues lacked expression of both markers.
On the other hand, significant expression of COX-2 and iNOS was detected in the ulcer bed in the model group when rats were sacrificed 10 days after gastric ulcer induction. In agreement with our study, Tatemichi et al. (2003) and Shigeta et al. (1998) stated that iNOS and COX-2 expression peaked during the rapid healing phase and was limited to the ulcer bed. According to Halter et al. (1995), four healing phases are recognized in experimental models of gastric ulcer: an early lag phase (days 1-3), a rapid healing phase (days 3-14), a late lag phase (days 14-18) and a remodeling phase (day 18 and onward). In the ulcer model group in our study, COX-2 and iNOS expression was mainly encountered in inflammatory cells at the ulcer bed. Similarly, Shigeta et al. (1998) reported that strong COX-2 immunoreactivity was found in macrophages/monocytes, granulocytes and fibroblasts at the ulcer bed. Also, Tatemichi et al. (2003) demonstrated that iNOS-positive cells were localized only among the inflammatory cells and fibroblasts at the ulcer bed.
Angiogenesis is another important factor that plays a pivotal role in gastric ulcer healing, since the neovasculature promotes nutrient supply to the healing tissue (Takahashi et al., 1998b). In the present study, MVD (one of the most commonly used measures to quantify angiogenesis (Kang et al., 2010)) was significantly increased in the model group and was significantly positively correlated with the length of regenerated mucosa. In addition, a positive correlation was detected between iNOS and COX-2 expression on one hand and MVD on the other hand. Such findings suggest that iNOS and COX-2 may contribute to the ulcer healing process through regulation of angiogenesis. This was further supported by Konturek et al. (1993), who reported that NO stimulates angiogenesis in the ulcer base, contributing to gastric ulcer healing. Also, Leahy et al. (2000) stated that COX-2-derived PGs have similar angiogenesis-stimulating effects.
In the current study, a statistically significant positive correlation between COX-2 and iNOS expression was detected. This finding further supports the recent identification of a synergistic molecular interaction between the COX-2 and iNOS pathways, proving that these two systems are related and may represent a major mechanism in inflammatory responses (Kim et al., 2005; Fang et al., 2000; Kornau et al., 1995).
As diabetes is associated with delayed ulcer healing, the present study examined the effect of one of the recently introduced oral antidiabetic drugs, sitagliptin (a DPP-4 inhibitor), on the healing process of gastric ulcer. DPP-4 is a serine protease that is widely distributed throughout the body, expressed as an ectoenzyme on endothelial cells, on the surface of T-lymphocytes and in a circulating form. Although there are many potential substrates for this enzyme, it seems to be especially critical for the inactivation of the incretin hormones GLP-1 (glucagon-like peptide-1) and gastric inhibitory peptide (GIP) (Baggio and Drucker, 2007).
In our study, ulcer healing was significantly impaired in the sitagliptin-treated group. Compared to the ulcer model group, the ulcerated area in the sitagliptin-treated group was significantly larger and maturation of the ulcer base was significantly impaired. In addition, inflammatory changes were more severe and mucosal regeneration was less pronounced in the sitagliptin-treated group compared to the ulcer model group; however, these results did not reach statistical significance.
Expression of COX-2 and iNOS as well as MVD in our study were significantly diminished in the sitagliptin-treated group compared to the ulcer model group. This was further substantiated by our finding of a significant negative correlation between the mean ulcerated area on one hand and COX-2 expression, iNOS expression and MVD on the other hand. In addition, the intensity of inflammatory changes and the thickness (maturation) of the ulcer base in our study were significantly negatively correlated with COX-2 and iNOS expression.
Such results suggest that sitagliptin acts as an inhibitor of both COX-2 and iNOS, leading to impairment of ulcer healing processes, especially angiogenesis. This is in accordance with Tatemichi et al. (2003) and Shigeta et al. (1998), who reported that administration of COX-2 and iNOS inhibitors resulted in significant prevention of mucosal regeneration and maturation of the ulcer base as well as regression of angiogenesis in the examined rat stomachs.
In the sitagliptin-treated group in our study, COX-2 and iNOS were mostly expressed at the ulcer margins, with less intense expression at the ulcer base, which probably has a deleterious effect on ulcer healing. This is in accordance with Tarnawski et al. (1995), who reported that iNOS may act detrimentally on ulcer healing if it is expressed at the ulcer margin, an important area for ulcer healing that supplies new epithelial cells (the regenerating zone).
Few reports have investigated the effect of sitagliptin administration on iNOS expression in various tissues. Nader et al. (2012) have shown that NO content as well as the mRNA expression of iNOS was remarkably decreased by sitagliptin treatment in a murine model of allergic airway disease.
Other studies investigated the role of incretins and incretin mimetics on iNOS expression. Salehi et al. (2008) reported that GLP-1 suppressed excessive NO generation and iNOS activity in diabetic rat islets via activation of the cAMP/PKA system. Also, Belin et al. (1999) and Jimenez-Feltstrom et al. (2005) demonstrated that GLP-1 reduced NO production through increasing the level of cAMP in high glucose- and IL-1β-stimulated islets, respectively. In addition, Kang et al. (2009) showed that exenatide (a GLP-1 agonist) decreased cytokine-induced iNOS protein expression.
On the other hand, Ye et al. (2010) have shown that sitagliptin had no effect on COX-2 activity in experimentally induced myocardial infarction in rats.
CONCLUSION
The findings of our study together with previous reports show that the acetic acid-induced gastric ulcer model serves as an excellent model for the study of gastric ulcer development and healing. In addition, it provides further evidence on the synergistic actions of COX-2 and iNOS and the fact that they contribute to gastric ulcer healing possibly through stimulation of angiogenesis and modulation of inflammatory responses.
Sitagliptin was found to significantly impair gastric ulcer healing in rats, possibly through inhibition of iNOS and COX-2 expression. Thus, further studies are needed to justify its prescription to diabetic patients with a pre-existing gastric ulcer.
The cellular mechanism by which sitagliptin inhibits iNOS and COX-2 expression in the ulcerated gastric mucosa remains to be elucidated. | 2019-01-23T00:39:39.116Z | 2013-09-04T00:00:00.000 | {
"year": 2013,
"sha1": "25019264d730b4672ead22ea968294f28aca8733",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajptsp.2013.107.119",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "436463359098529d676253a99a1751d73c1983c4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252199956 | pes2o/s2orc | v3-fos-license | An Evaluation of Low Overhead Time Series Preprocessing Techniques for Downstream Machine Learning
In this paper we address the application of pre-processing techniques to multi-channel time series data with varying lengths, which we refer to as the alignment problem, for downstream machine learning. The misalignment of multi-channel time series data may occur for a variety of reasons, such as missing data, varying sampling rates, or inconsistent collection times. We consider multi-channel time series data collected from the MIT SuperCloud High Performance Computing (HPC) center, where different job start times and varying run times of HPC jobs result in misaligned data. This misalignment makes it challenging to build AI/ML approaches for tasks such as compute workload classification. Building on previous supervised classification work with the MIT SuperCloud Dataset, we address the alignment problem via three broad, low overhead approaches: sampling a fixed subset from a full time series, performing summary statistics on a full time series, and sampling a subset of coefficients from time series mapped to the frequency domain. Our best performing models achieve a classification accuracy greater than 95%, outperforming previous approaches to multi-channel time series classification with the MIT SuperCloud Dataset by 5%. These results indicate our low overhead approaches to solving the alignment problem, in conjunction with standard machine learning techniques, are able to achieve high levels of classification accuracy, and serve as a baseline for future approaches to addressing the alignment problem, such as kernel methods.
I. INTRODUCTION
Time series are ubiquitous in many domains such as medicine, speech, finance, control systems, and computing, to name a few. A relatively new area of multi-modal time series is the modern cloud or High Performance Computing (HPC) system, where time series data can arise from a large variety of sources. These sources range from highly granular data collected from compute infrastructure, such as CPU/GPU, networking, and file system utilization, to higher level time series data such as overall cluster utilization as a function of time. These data are an important source for monitoring datacenter operations and can provide actionable insights into overall system health and operating efficiency. Additionally, environmental data from the datacenter, such as temperature, humidity, and power consumption, are critical for monitoring and operating datacenters from the perspective of energy efficiency and physical factors that can drastically affect system operation. This area of time series analysis is a ripe target for the application of AI/ML approaches for optimization. For example, Google researchers applied machine learning to optimize datacenter cooling [1], [2]. However, neither the data nor the models are publicly available.

Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. § Corresponding author. Email: mlweiss@ll.mit.edu
To enable the development of AI/ML models in this domain, we have released the MIT SuperCloud Dataset [3], which consists of a large collection of time series data collected from an operational supercomputing center. This dataset consists of time series of hardware utilization which includes the CPU and GPU utilization over time, memory usage, file I/O, and other signals monitored on the system. The MIT SuperCloud Dataset is freely available for download via AWS Open Data Registry. Detailed instructions for downloading the data are available at https://dcc.mit.edu/data.
The goal of the MIT SuperCloud Dataset is to foster the development of AI/ML approaches for improving cluster operations. One important application is the MIT SuperCloud Workload Classification Challenge [4], which aims to use machine learning to identify compute workloads on the system using time series data collected from all jobs running on the system. This paper is a further contribution to the MIT SuperCloud Workload Classification Challenge, extending the baseline implementations and results presented in [4].
To effectively perform machine learning tasks, such as classification or regression, on time series data, certain requirements must be met by the data. One such requirement is that the feature spaces of the time series align, i.e., are of the same dimension. This is a general principle of machine learning algorithms, which require an input matrix where the rows represent the trials or samples and the columns correspond to the feature space. However, in the case of real-world time series, ensuring that data occupy the same dimension or feature space, which we refer to herein as alignment, is often non-trivial. Potential causes of misalignment include missing data, time series collected over different time intervals, or data collected at different sampling rates. In this work, which is part of the larger MIT SuperCloud Datacenter Challenge [3], we build on the results in [4], where we address the problem of classifying non-aligned time series data collected from the MIT SuperCloud High Performance Computing Center.
A. Prior Work
A variety of approaches to the time series alignment problem exist, ranging from the padding or interpolating of missing values to the application of state-of-the-art neural networks. While padding and interpolation come with a low computational overhead, they result in all time series having the length of the largest time series. In order not to significantly grow the size of the time series data, other methods based on dynamic time warping [5]- [8] and Support Vector Machines/Kernel methods [9]- [11] have also been suggested. Recent attempts also exist that leverage neural networks [12]- [15], which are able to learn complex patterns from data but at the cost of the high computational complexity required by neural networks.
B. Our Contribution
In this work we avoid both dataset augmentation through padding/interpolation and the higher overhead computations discussed above, while achieving high classification accuracy, by employing low computational overhead approaches to the time series alignment problem. Specifically, we:
• Establish a baseline for improved classification accuracy by sampling N points from each time series.
• Leverage low overhead preprocessing techniques, based upon summary statistics and the Fourier Transform, to generate N-sample time series which consider the full time series.
• Demonstrate these low overhead time series preprocessing techniques achieve upwards of 95% accuracy using well-known machine learning algorithms, an improvement of 5% over previous work based upon sampling N contiguous points in a time series.
II. DATASETS

A. MIT SuperCloud Dataset
The experiments in this paper used the MIT SuperCloud Dataset [3], a dataset consisting of over 2TB of data collected from the MIT SuperCloud HPC cluster. Of primary interest to this paper, the MIT SuperCloud Dataset contains multi-channel GPU time series data consisting of features such as GPU memory utilization, GPU temperature, and GPU power draw. Of the over 2TB of data in the MIT SuperCloud Dataset, there is approximately 65GB of labelled GPU time series data. This labelled data consists of 3,430 manually labelled deep learning jobs which ran on the MIT SuperCloud HPC. For further details on the MIT SuperCloud Dataset and the labelled subset see [3] and [4].
B. Preprocessing Techniques
As mentioned in Section I, the work herein builds upon [4], extending baseline implementations for workload classification. In [4] the goal was to classify the different deep learning models found in the labelled dataset based upon their GPU time series. The problem of multi-channel time series alignment was addressed in [4] by simply selecting the first, middle, or a random minute of data from each of the 3,430 jobs. In this paper, we extend the results from [4] and introduce low overhead preprocessing techniques which address the alignment problem. The preprocessing techniques we employ are broadly:
• Select a subset of N contiguous points from each time series
• Split each full time series into N windows and perform summary statistics on each window
• Select the N largest Fourier Coefficients for each full time series

1) N-point Subset Selection: This is the same technique that was used in [4] to address the alignment problem and is included here as a baseline for comparison with prior work. While in [4] approximately 540 samples were selected from each time series, here we experimented with two sampling sizes: 100 and 1000 samples. Furthermore, as in [4], samples were taken from three different sections of each time series: the first N samples, the middle N samples, and a random sampling of N contiguous points somewhere in each time series. This resulted in six different N-point subset selection datasets (two values of N; start, middle, and random sections).

2) N-window Summary Statistics: One issue with N-point subset selection is that it does not consider all data in a time series. To mitigate this issue, while addressing the alignment problem, we broke each full time series into N windows and computed the mean and standard deviation of each window (a minimal sketch of this step for a single channel follows below). This resulted in a total of four N-window summary statistics datasets (two values of N; mean and standard deviation summary statistics).
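The following Python sketch illustrates the N-window step; it is our own illustration rather than the authors' released code, and the function name and the use of np.array_split (which tolerates lengths not divisible by N) are our choices:

```python
import numpy as np

def window_summary(series, n_windows=100, stat="mean"):
    """Split a 1-D time series into n_windows contiguous windows and summarize
    each window, giving a fixed-length feature vector regardless of the
    original series length (assumes len(series) >= n_windows)."""
    windows = np.array_split(np.asarray(series, dtype=float), n_windows)
    if stat == "mean":
        return np.array([w.mean() for w in windows])
    if stat == "std":
        return np.array([w.std() for w in windows])
    raise ValueError(f"unsupported stat: {stat}")

# Two series of very different lengths map to the same feature dimension.
a = window_summary(np.random.rand(5400), n_windows=100)
b = window_summary(np.random.rand(123456), n_windows=100)
assert a.shape == b.shape == (100,)
```

Because the number of windows, not the series length, fixes the output dimension, misaligned series of arbitrary length land in a common feature space.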
3) N-largest Fourier Coefficients: Our third approach to the alignment problem was to select the N largest Fourier Coefficients of each time series. In both the discrete and continuous cases of the Fourier Transform [16], [17], the Fourier Coefficients are the components of a function (in our case, GPU time series signals) projected onto an orthogonal basis in Hilbert space, where the basis functions or sequences are complex exponentials [16], [17]. In this sense, selecting the N largest Fourier Coefficients is analogous to Principal Component Analysis [18]. In total there were two Fourier Coefficient datasets, one for each of the two values of N.
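A corresponding sketch of the Fourier-based variant; the paper does not specify whether the complex coefficients or their magnitudes are retained, so we show magnitudes as one reasonable choice, and the helper name is hypothetical:

```python
import numpy as np

def n_largest_fourier(series, n=100):
    """Map a 1-D time series of arbitrary length to a fixed-length feature
    vector: the magnitudes of its n largest Fourier coefficients.
    Assumes len(series) >= 2 * n so at least n coefficients exist."""
    coeffs = np.fft.rfft(np.asarray(series, dtype=float))
    # Indices of the n coefficients with the largest magnitude.
    idx = np.argsort(np.abs(coeffs))[-n:]
    return np.abs(coeffs[idx])
```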
As an example, considering the SVM cluster from Figure 1: in the N-window summary statistics case, N=100 indicates each full time series was broken into 100 windows, with statistics performed on each window; in the Fourier Coefficient case, N=100 indicates the dataset consisted of the largest 100 Fourier Coefficients. While there are 3,430 unique jobs in the labelled dataset, as some jobs requested multiple GPUs, the actual number of distinct GPU time series in our datasets was 19,481. Each dataset was split into stratified training and testing sets using the scikit-learn StratifiedShuffleSplit class [19]. As the GPU portion of the labelled dataset consists of multi-channel time series, we used seven channels of sensor data collected for each GPU job. The number of time series samples for each of these seven sensors was 100 or 1000, based on the descriptions above. Thus, taking N=100 as an example, each training dataset was in R^(15584×100×7) (15,584 trials, 100 samples per trial, seven sensors per trial). In order to ensure all data were dimensionless, we applied min-max scaling, based upon the MinMaxScaler class in scikit-learn. These scaling techniques expect a two-dimensional input, so we reshaped each dataset by stacking the last two dimensions prior to scaling. Using the example from above, the dataset after reshaping was in R^(15584×700). All the descriptions above also apply to the data in the test set, with the exception that the scaling models were fit only on the training data. In total, the two N values and six preprocessing techniques resulted in twelve distinct datasets which were used in our experiments.
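The reshaping and scaling steps can be summarized as follows; shapes follow the N=100 example in the text, the train/test split sizes are illustrative, and the random arrays are stand-ins for the real sensor data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative stand-ins: 15,584 training trials and 3,897 test trials,
# 100 samples per trial, 7 GPU sensors per trial.
X_train = np.random.rand(15584, 100, 7)
X_test = np.random.rand(3897, 100, 7)

# Stack the last two dimensions: (trials, samples, sensors) -> (trials, samples * sensors).
X_train_2d = X_train.reshape(len(X_train), -1)  # shape (15584, 700)
X_test_2d = X_test.reshape(len(X_test), -1)

# Fit the min-max scaler on the training data only, then apply it to both sets.
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train_2d)
X_test_scaled = scaler.transform(X_test_2d)
```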
A. Machine Learning
Using the datasets described in Section II, we constructed a machine learning pipeline consisting of dimensionality reduction via PCA followed by training a classification estimator on the training data via cross-validation. The cross-validation was done using scikit-learn's GridSearchCV [19] with 5 cross-validation folds. The grid search parameters for PCA were dimensions of 256 and 512. In total, we trained three estimators (all using scikit-learn): Support Vector Classifier (SVC) [20], Random Forests (RF) [21], [22], and k-Nearest Neighbors (KNN) [23]. The grid search hyperparameters (naming follows the scikit-learn convention) and associated values for each of the estimators were:
• SVC: C (200, 300), kernel (rbf)
• RF: n_estimators (100, 200)
• KNN: n_neighbors (7, 9)
The test set was then evaluated on the best model returned by GridSearchCV, and this value was used as the final accuracy in our results. A sketch of this pipeline follows below.
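A minimal sketch of the pipeline for the SVC case (the RF and KNN estimators are handled analogously); grid values are taken from the text, while X_train_scaled, y_train, and their test-set counterparts are assumed to come from the preprocessing sketch above plus the dataset's class labels:

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# PCA for dimensionality reduction followed by a classifier, as described above.
pipe = Pipeline([("pca", PCA()), ("clf", SVC())])

# Grid values from the text: PCA dimensions 256/512; SVC C 200/300, rbf kernel.
param_grid = {
    "pca__n_components": [256, 512],
    "clf__C": [200, 300],
    "clf__kernel": ["rbf"],
}

# 5-fold cross-validation over the grid; y_train / y_test are assumed labels.
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train_scaled, y_train)
test_accuracy = search.score(X_test_scaled, y_test)  # accuracy of the best model
```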
B. Results
Experimental results are shown in Figure 1. The best test set accuracy was achieved by either the mean or standard deviation preprocessing techniques across all experiments, with the Fourier Coefficients a close second. Of interest is the fact that our proposed preprocessing techniques outperformed the start, middle, and random techniques in almost all the experiments. While the start experiments depended largely on N, the remaining five techniques were relatively independent of N. Specifically, in all experiments the best performing model achieved an accuracy of 95% or greater using our proposed preprocessing techniques, while the best performing baseline technique was the middle dataset, with a top accuracy of 90%, which was stable across values of N. It should be noted that we also ran experiments for N values of 250 and 500, with results similar to those for N values of 100 and 1000. Data and code will be made publicly available via the MIT SuperCloud Datacenter Challenge website: https://dcc.mit.edu.
A. Summary
In this work we addressed the time series alignment problem via three low overhead preprocessing techniques: sampling N contiguous points, performing mean and standard deviation summary statistics on N windows, and selecting the N largest Fourier Coefficients, the latter two of which take the full time series into account. These techniques allow the alignment of time series of arbitrary length without growing the dataset size. Additionally, given the low computational overhead of these techniques, and the high accuracy they achieve, these techniques are well suited to time series classification applications where power consumption and computational complexity are operational constraints.
B. Future Work
Given the success of our proposed techniques, we envision two paths of future research on the time series alignment problem. First, we see promise in the investigation of kernel-based methods for the classification of misaligned time series data; this is evidenced by the fact that modifications to Support Vector Machines involving dynamic time warping have been successful in this domain. Additionally, from the point of view of a High Performance Computing operational environment, the investigation of time series preprocessing techniques to improve early classification of compute jobs is highly desirable. | 2022-09-13T01:16:00.395Z | 2022-09-12T00:00:00.000 | {
"year": 2022,
"sha1": "1ca750da7f69aff5fe2c7c50841f98eca24c8329",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1ca750da7f69aff5fe2c7c50841f98eca24c8329",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252621007 | pes2o/s2orc | v3-fos-license | Targeted delivery of galbanic acid to colon cancer cells by PLGA nanoparticles incorporated into human mesenchymal stem cells
Objective: The aim of this study was to investigate the efficacy of mesenchymal stem cells (MSCs) derived from human adipose tissue (hMSCs) as carriers for the delivery of galbanic acid (GBA), a potential anticancer agent, loaded into poly (lactic-co-glycolic acid) (PLGA) nanoparticles (nano-engineered hMSCs) against tumor cells. Materials and Methods: GBA-loaded PLGA nanoparticles (PLGA/GBA) were prepared by a single emulsion method and their physicochemical properties were evaluated. Then, PLGA/GBA nanoparticles were incorporated into hMSCs (hMSC/PLGA-GBA) and their migration ability and cytotoxicity against colon cancer cells were investigated. Results: The loading efficiency of PLGA/GBA nanoparticles, with an average size of 214±30.5 nm, into hMSCs was about 85 and 92% at GBA concentrations of 20 and 40 μM, respectively. Nano-engineered hMSCs showed significantly higher migration to cancer cells (C26) compared to normal cells (NIH/3T3). Furthermore, nano-engineered hMSCs could effectively induce cell death in C26 cells in comparison with non-engineered hMSCs. Conclusion: hMSCs could be implemented for efficient loading of PLGA/GBA nanoparticles to produce a targeted cellular carrier against cancer cells. Thus, given their minimal toxicity to normal cells, they deserve to be considered as a valuable platform for drug delivery in cancer therapy.
Introduction
Despite outstanding advancement in medical technology, cancer remains one of the leading causes of mortality and morbidity throughout the world. Chemotherapy is one of the most commonly used methods for cancer treatment, but important limitations, including drug resistance, failure of chemotherapy against metastasis, insufficient tumor selectivity and cytotoxic effects on healthy tissues, have led to the development of other strategies for cancer treatment (Charbgoo et al., 2020; Hashemi et al., 2020).
Over the past decades, different herbal products with tremendous chemical diversity have been investigated for their anticancer properties (Huang et al., 2021). Galbanic acid (GBA), isolated from Ferula species (Apiaceae) has been documented to have various promising biological activities including anticancer, cell cycle arrest effects and anti-proliferative activities in different cancer cells (Sajjadi et al., 2019;Shahcheraghi et al., 2021). However, clinical application of GBA is widely limited by low solubility, low permeability in aqueous media and poor bioavailability. So, to overcome these obstacles and improve the pharmacological properties of GBA, different delivery systems have been introduced. Poly (lactic-co-glycolic acid) (PLGA) as the most prevalent nano-polymer drug carrier is a biodegradable and biocompatible polyester which has been approved by the FDA and extensively applied for delivery of different therapeutic agents including drugs, genes, proteins and peptides (Du et al., 2021;Lin et al., 2021).
Recently, mesenchymal stem cells (MSCs), as an efficient cell-based therapy system, have attracted a great deal of attention for the targeted delivery of anticancer drugs into primary tumors and metastases (Heidari et al., 2020; Hour et al., 2020; Yin et al., 2020). Some clinical advantages, such as easy isolation from multiple tissues, low immunogenic properties, fast ex vivo expansion, immunomodulatory functions, damage repair capacity, feasibility of autologous transplantation and ability to be manipulated or genetically modified, qualify MSCs as ideal vehicles for drug/gene delivery (Gao et al., 2013; Krueger et al., 2018). However, anticancer drug cytotoxicity to MSCs and rapid drug efflux remain significant challenges. Incorporating controlled-release nanoparticles (NPs) such as PLGA into MSCs is an alternative delivery approach to overcome these problems (Zhang et al., 2015).
In this study, GBA-loaded PLGA NPs were constructed and then incorporated into MSCs derived from human adipose tissue (hMSC/PLGA-GBA NPs). Furthermore, the migration and cytotoxicity of hMSC/PLGA-GBA NPs against colon cancer cells were investigated.
Preparation of GBA-loaded PLGA NPs
Galbanic acid-loaded PLGA nanoparticles (PLGA/GBA NPs) were prepared using the single emulsion solvent evaporation technique (Hafezi Ghahestani et al., 2017). Briefly, PLGA (25 mg) and GBA (1.25 mg) were dissolved in 1 ml acetone:dichloromethane (1:4) and stirred for 15 min. The prepared solution was then added to PVA (5% w/v) as an aqueous phase under sonication on ice (amplitude 80%, for 10 min) using a probe sonicator (Fisons Instruments Ltd., Crawley, UK). The prepared emulsion was added dropwise to 10 ml of 0.1% PVA. The reaction was continued under stirring overnight in order to evaporate the organic solvent. The NPs, as final products, were obtained by centrifugation (at 18,000 rpm for 20 min), washed three times with distilled water to remove excess surfactant, and finally lyophilized (Hashemi et al., 2021).
Characterization of the synthesized NPs
Particle size (diameter, nm), ζ-potential (surface charge) and polydispersity index (PDI) of NPs were determined by laser light scattering (Zetasizer Nano ZS 3000 HS, Malvern, UK). The morphology of NPs was monitored using an atomic force microscope (AFM) (Park Scientific, Inc., Sunnyvale, CA).
Determination of encapsulation efficiency (EE%) and loading content (LC%) in PLGA NPs
For evaluating the encapsulation efficiency and GBA loading content, PLGA/GBA NPs (1 mg) were dissolved in acetonitrile (1 ml) and sonicated for 5 min to completely degrade the PLGA matrix. After centrifugation, the supernatant was collected and the GBA concentration was measured at 325 nm by UV-Vis spectrophotometry (UV-160A, Shimadzu, Japan) (Ebrahimian et al., 2017; Afsharzadeh et al., 2019). The GBA encapsulation efficiency and loading content were calculated via the following equations:
LC (%) = (mass of GBA in NPs / mass of GBA-loaded NPs) × 100
EE (%) = (amount of GBA in NPs / amount of GBA used for encapsulation) × 100
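Expressed as code, the two equations are straightforward; the numeric values below are hypothetical, chosen only to be consistent with the EE of 71.2% reported in the Results:

```python
def encapsulation_efficiency(gba_in_nps_mg, gba_fed_mg):
    """EE (%) = amount of GBA in NPs / amount of GBA used for encapsulation x 100."""
    return gba_in_nps_mg / gba_fed_mg * 100

def loading_content(gba_in_nps_mg, loaded_nps_mg):
    """LC (%) = mass of GBA in NPs / mass of GBA-loaded NPs x 100."""
    return gba_in_nps_mg / loaded_nps_mg * 100

# Hypothetical example: 0.89 mg GBA recovered from a 1.25 mg GBA feed.
print(encapsulation_efficiency(0.89, 1.25))  # 71.2
```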
In vitro release of GBA from PLGA/GBA NPs
In vitro release of GBA from PLGA/GBA NPs was investigated using a centrifugation method. PLGA NP suspension (200 µl), containing GBA (40 µM), was added to PBS (800 µl, pH 7.4) or citrate buffer (800 µl, pH 5.5) and incubated at 37°C at a fixed shaking speed of about 100 rpm. Supernatant was collected at 1, 2, 4, 24, 48, 72, 96 and 120 hr following centrifugation at 17,000 g for 20 min (Ebrahimian et al., 2016; Mosafer et al., 2017). After each step, the supernatant was collected and replaced with the same amount of fresh buffer to keep the buffer volume unchanged and provide sink conditions. The GBA concentration was measured at 325 nm by UV-Vis spectrophotometry. Experiments were performed in triplicate, and the release data are shown as the cumulative percentage of GBA with respect to the initial content of GBA in the NPs versus time.
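For clarity, a minimal sketch of how the cumulative release percentage can be computed from such measurements; we assume here that the entire supernatant is withdrawn and replaced at each time point, as the centrifugation protocol suggests, so drug removed at earlier samplings must be added back into the running total:

```python
def cumulative_release_percent(concs_ug_per_ml, volume_ml, total_drug_ug):
    """Cumulative release profile when the whole supernatant is withdrawn
    and replaced with fresh buffer at each sampling."""
    released_ug = 0.0
    profile = []
    for c in concs_ug_per_ml:         # measured concentration at each time point
        released_ug += c * volume_ml  # drug withdrawn at this sampling
        profile.append(100.0 * released_ug / total_drug_ug)
    return profile

# Illustrative call: concentrations at successive time points for 1 ml of buffer.
print(cumulative_release_percent([2.0, 1.5, 1.0], volume_ml=1.0, total_drug_ug=20.0))
# -> [10.0, 17.5, 22.5]
```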
Cell lines
Mesenchymal stem cells were isolated from the adipose tissue of healthy humans according to our previously published work and cultured in L-DMEM medium (Gibco, USA) containing 10% FBS (Gibco, USA), penicillin (100 IU/ml) and streptomycin (100 μg/ml) (Azimifar et al., 2021). All procedures were approved by the Mashhad University of Medical Sciences review committee (approval number IR.MUMS.SP.1396.116). C26 (Mouse Colon Carcinoma) and NIH/3T3 cells were purchased from the Pasteur Institute of Tehran, Iran, and cultured in RPMI medium containing FBS (10%) and antibiotics (1%). All cells were cultured at 37°C in a humidified incubator containing 5% CO2/95% air.
Evaluation of hMSC surface markers by flow cytometry
Expression of human MSC antigens (CD90 and CD44) and the absence of blood cell markers (CD45 and CD34) were assessed by flow cytometry, using a FACS Calibur instrument (Becton Dickinson) based on the manufacturer's instruction (L Ramos et al., 2016).
Osteogenic and adipogenic differentiation of hMSCs
The osteogenic and adipogenic differentiation potential of isolated MSCs was assessed via induction in differential osteogenic and adipogenic media separately. The medium was changed every 2-3 days. After 21 days, osteogenic differentiation was investigated by staining the cells with Alizarin Red S solution (Santa Cruz, CA) to observe calcium-nodule deposits. For adipogenic differentiation, lipid droplets were stained with Oil Red O solution (Santa Cruz, CA) (Tayarani-Najaran et al., 2021).
Loading PLGA/GBA NPs into hMSCs
hMSCs (10^5 cells/ml) were incubated as a single-cell suspension in serum-free DMEM medium with PLGA/GBA (20 and 40 µM of GBA) for 4 hr at 37°C. Then, the hMSC suspension was centrifuged at 1500 rpm for 5 min and the supernatant was collected. PLGA/GBA NP loading into hMSCs was determined by an indirect method. For this purpose, the obtained supernatant was centrifuged at 14,000 rpm for 20 min and the sedimented pellet was lysed by adding acetonitrile (200 μl). Then, methanol (400 μl) was added and the mixture was centrifuged for 15 min at 17,000 rpm. Finally, the resulting supernatant was analyzed for GBA content using UV-Vis spectrophotometry at 325 nm (Wang et al., 2018).
Release from nano-engineered hMSCs
hMSCs (5 × 10^4 cells) incorporated with PLGA/GBA NPs (20 and 40 µM of GBA) were suspended in 50 ml FBS-free DMEM medium containing 0.1% Tween 80 and incubated at 37°C. At predetermined time points (1, 2, 4, 24, 48, 72, 96, 120 and 144 hr), cells were centrifuged at 1000 rpm for 5 min, and supernatant (450 μl) was removed and replaced with the same amount of fresh medium. Collected supernatants were stored at 4°C and examined by UV-Vis spectrophotometry at 325 nm. The percentage of GBA released from engineered hMSCs at each time point was calculated and plotted as described in section 2.3.5 (Zhao et al., 2017).
Cytotoxicity of the synthetized NPs
The cytotoxicity of GBA and PLGA/GBA NPs on hMSCs, C26 and NIH/3T3 cells was assessed using the MTT assay. Cells (5 × 10^3 cells/well) were seeded in 96-well plates and incubated overnight in a humidified incubator. Then, GBA and PLGA/GBA NPs were added at concentrations of 1.25-40 µM of GBA to hMSCs and 20-120 µM of GBA to C26 and NIH/3T3 cells. Untreated cells were used as the control group. After 72 hr, cells were washed with PBS and treated with 20 µl of MTT solution (5 mg/ml in PBS) for 4 hr. The crystals formed were dissolved in 100 µL of dimethyl sulfoxide (DMSO). The absorbance was measured at 570 nm with a reference wavelength of 630 nm by an Infinite® 200 PRO multimode microplate reader (Tecan Group Ltd, Männedorf, Switzerland) (Salmasi et al., 2018; Hashemi et al., 2022).
In vitro migration assay
The tumor-tropism capacity of naive and engineered hMSCs was assessed using a 24-well Transwell plate (PET membrane, 8 μm pore size, Corning). C26 and NIH/3T3 cells (2 × 10^4 cells/well) were seeded in the bottom chamber of the Transwell plate. After 24 hr, engineered and naive hMSCs (4 × 10^4 cells) were suspended in serum-free DMEM medium and added to the top chamber of the Transwell plate. The plated cells were incubated at 37°C overnight to allow migration through the membrane. Then, to assay cell migration, cells remaining on the upper side of the top chamber were carefully removed, and cells that had migrated to the lower side of the top chamber were fixed with methanol and stained with Giemsa solution. Stained cells were observed and counted under an inverted microscope (five fields of view, at 10× magnification).
In vitro evaluation of antitumor activity of nano-engineered hMSCs
The anti-tumor effect of nano-engineered hMSCs was monitored using a co-culture assay. Cancerous C26 and normal NIH/3T3 cells (2 × 10^4) were seeded in the bottom chambers of the Transwell plate (PET membrane, 0.4 μm pore size, Corning). After 24 hr, naive hMSCs and nano-engineered hMSCs at different concentrations of GBA (20, 40 and 80 µM) were added to the top chambers of the Transwell plate. Untreated cells were used as the control group. After 72 hr of incubation, the viability of C26 and NIH/3T3 cells was determined by MTT assay as described in section 2.2.10.
Statistical analysis
Statistical analysis was conducted by GraphPad Prism 8 software (GraphPad software, CA, USA). Data are presented as means±SD of triplicates and comparison among the different groups was made by one-way ANOVA followed by Student-Newman-Keuls assuming equal variance in two groups. The level of statistical significance in all analyses was set at p<0.05.
Physicochemical properties of PLGA/GBA NPs

Particle size, polydispersity index (PDI) and zeta potential of PLGA and PLGA/GBA NPs are presented in Table 1. The encapsulation efficiency (EE%) and drug loading content (LC%) of GBA in PLGA NPs were 71.2% and 3.9%, respectively. The AFM image illustrated that PLGA NPs had spherical morphology with uniform distribution and an average size of about 200 nm (Figure 1).

In vitro release of GBA from PLGA/GBA NPs

In vitro release of GBA from PLGA/GBA NPs is shown in Figure 2. GBA released from PLGA NPs (40 µM, GBA-equivalent) during the first day in PBS buffer (pH 7.4) was only 21%, followed by sustained release with approximately 50% of GBA released within 120 hr. In citrate buffer with acidic pH (which represents the acidic environment around the tumor or inside the lysosome), 80% of the GBA loaded into the NPs was released rapidly during the first day, after which the release rate was constant. By the fifth day, almost all of the GBA had been released from the NPs.
Evaluation of surface markers of hMSCs extracted from adipose tissue
Expression of surface antigens on extracted hMSCs (at passage 3) was studied using flow cytometry. As shown in Figure 3, these cells were positive for hMSC markers (CD44 and CD90) and negative for hematopoietic markers (CD45 and CD34).
Differentiation capacity of hMSCs
The multilineage differentiation potential of isolated hMSCs at passage 3 was verified by culturing them in induction media for three weeks. As shown in Figure 4, Alizarin Red S (Figure 4A) and Oil Red O staining (Figure 4B) revealed the successful differentiation of hMSCs.
Loading of PLGA/GBA NPs into hMSCs
The internalization of PLGA/GBA NPs into the hMSC suspension was explored using UV-Vis spectrophotometry at 325 nm through the indirect method. The intracellular uptake was estimated at about 80 and 92% for GBA 20 and 40 μM, respectively, after 4 hr of incubation.
Release from nano-engineered hMSCs
As shown in Figure 5, after 72 hr, release of GBA was 45.06 and 29.80% for 20 and 40 µM GBA-equivalent, respectively, followed by sustained release of GBA reaching approximately 60% and 50% at 144 hr.

Cytotoxicity of GBA and PLGA/GBA NPs against hMSCs, C26 and NIH/3T3 cells

hMSC viability, assessed by MTT assay following exposure to GBA and PLGA/GBA NPs, is illustrated in Figures 6 and 7, respectively. hMSC survival after 72 hr of incubation with different concentrations of GBA and PLGA/GBA (1.25-40 µM, GBA-equivalent) was not affected. At a concentration of 40 µM, cell viability in GBA-treated cells decreased, while hMSCs treated with PLGA/GBA showed 80% viability, suggesting that PLGA/GBA was non-toxic to hMSCs. As shown in Figure 8, significant cytotoxicity was observed for PLGA/GBA NPs, compared to GBA, at concentrations of 80 and 120 µM in C26 cells. Conversely, in NIH/3T3 cells, no considerable toxicity was observed at any concentration of PLGA/GBA NPs.
In vitro tumor tropism of nanoengineered hMSCs
To follow the migration ability of naive hMSCs and nano-engineered hMSCs towards cancerous cells, the Transwell migration assay was performed.
As demonstrated in Figure 9, minimal migration towards NIH/3T3 cells, as normal cells, was observed in the treated groups. Surprisingly, migration of hMSCs loaded with PLGA/GBA NPs through the membrane pores towards C26 cells in the bottom chamber significantly increased in comparison to unloaded hMSCs (p<0.05), representing the tropism of loaded hMSCs toward tumor cells. Furthermore, the tropism of both naive and nano-engineered hMSCs was significantly lower toward normal cells (NIH/3T3) (p<0.001).

Figure 9 Naive and nano-engineered hMSCs migrated toward C26 (Mouse Colon Carcinoma) and NIH/3T3 (mouse fibroblast) cells. Mean migrated cells from five random fields were considered; ***p<0.001 and *p<0.05. Untreated cells were used as the control group. hMSCs: human mesenchymal stem cells.
In vitro cytotoxicity of nano-engineered hMSCs
To investigate the in vitro cytotoxic potential of nano-engineered hMSCs on tumor cells, they were added to the top chamber of a Transwell plate at concentrations of 20, 40 and 80 μM GBA-equivalent, while C26 and NIH/3T3 cells were in the bottom chamber. Results of MTT analysis revealed that nano-engineered hMSCs at concentrations of 40 and 80 μM could reduce the viability of C26 cells after 72 hr. Conversely, NIH/3T3 cell survival was unaffected after 72 hr of incubation with different concentrations of nano-engineered hMSCs. Furthermore, C26 cell viability was not affected by naive hMSCs during 72 hr, implying that hMSCs as the cellular vehicle had no effect on inhibiting or promoting cancer cell growth (Figure 10).
Discussion
Recently, many studies have focused on the chemoprotective properties of natural products with high effectiveness and low side effects. Galbanic acid (GBA), a major lipophilic compound of Ferula species roots, fights progression of tumor cells by inducing G1 and G2/M arrest, inhibiting vascular endothelial growth factor (VEGF)-induced proliferation, and preventing hypoxia-inducible factor-1α (HIF-1α) transcriptional activation via suppression of the EGFR/HIF-1α signaling pathway (Kim et al., 2011; Zhang et al., 2012; Eskandani et al., 2015; Oh et al., 2015; Gharedaghi Kloucheh et al., 2021). However, the poor solubility and poor bioavailability of GBA in aqueous media limit its clinical applications. Therefore, many studies have focused on the development of nano-formulations for improving its therapeutic efficiency (Nik et al., 2019; Afsharzadeh et al., 2020).
Among the various approaches, polymeric carriers have been noted for their great properties including high stability and transport of both hydrophobic and hydrophilic drugs and active ingredients (Afsharzadeh et al., 2020).
Here, we used PLGA NPs to incorporate GBA, as an anti-tumor agent, in hMSCs. Biodegradable/biocompatible PLGA NPs were used with the aim of improving the solubility and chemical stability and to enhance the bioavailability of GBA (Ding and Zhu, 2018). PLGA NPs present several advantages such as being biodegradable, biocompatible, nonimmunogenic and non-toxic (Semete et al., 2010). Therefore, these properties make PLGA NPs suitable for stem cell engineering.
On the other hand, the low targeting efficiency of NPs restricts their applications in cancer therapy (Zhang et al., 2016). Cell-based targeting approaches using MSCs have shown potent tumor-homing potential in response to proinflammatory cytokines in the tumor microenvironment (Chulpanova et al., 2018). Moreover, it has been documented that MSCs have low immunogenicity and a positive safety profile in in vivo studies and clinical trials (Huang et al., 2020).
As reported previously, MSCs have been engineered for delivery of chemotherapeutic drugs such as paclitaxel, gemcitabine and doxorubicin. Commonly used materials in nanoparticle-engineered MSCs include polymeric micelles, mesoporous silica, dendrimers and PLGA (Li et al., 2011; Tripodo et al., 2015; Wang et al., 2018). Incorporation of GBA into PLGA NPs increases the drug-loading capacity of MSCs, warranting that a therapeutic dose of GBA is released at the tumor site (Vallet-Regí et al., 2018). Wang et al. loaded bone-marrow-derived MSCs with paclitaxel (PTX)-PLGA NPs and explored their application against glioma in the Transwell system (in vitro) and in rats (in vivo). The PTX-PLGA NP-loaded MSC treatment provided sustained PTX release in the form of both free paclitaxel and paclitaxel NPs. In addition, as expected, the survival time of orthotopic brain-tumor rats increased compared to free PTX or PTX-PLGA (Wang et al., 2018). In our investigation, PLGA NPs containing GBA were prepared by the single emulsion solvent evaporation method. The encapsulation efficiency (EE%) and drug loading (LC%) of GBA in PLGA NPs were 71 and 3.9%, respectively. By comparison, the loading efficiency of GBA in PLA-PEG NPs reported by Afsharzadeh et al. (2019) was about 40%.
PLGA NPs displayed an initial release of about 18% in the first 4 hr at pH 7.4, while it was about 40% at pH 5.5. After 24 hr, around 80% of GBA was released from PLGA NPs at pH 5.5, while it was about 21% at pH 7.4, followed by steady release until 120 hr, when GBA release reached about 50%, suggesting good stability in blood circulation. The initial burst has been attributed in other studies to hydrophobic drug molecules located on or near the surface of the NPs.
Drug release from PLGA/GBA NP-loaded hMSCs (nano-engineered hMSCs) over 72 hr was 45.06 and 29.80% for 20 and 40 µM GBA-equivalent, respectively, followed by sustained release of GBA reaching approximately 60 and 50% at 144 hr. These results could ensure sustained release of GBA from nano-engineered hMSCs into the systemic circulation.
It is important that loading of GBA-containing NPs does not reduce hMSC viability or their properties, including migration ability. The cytotoxicity of GBA and PLGA/GBA against hMSCs was also evaluated by MTT assay. Results showed that PLGA/GBA NPs were non-toxic to hMSCs at concentrations of 1.25-40 µM (GBA-equivalent), as the majority of hMSCs remained viable subsequent to loading. Viability of hMSCs was higher when treated with PLGA-encapsulated GBA than with free GBA. This is in good agreement with other reports that MSCs loaded with NPs maintained their viability and inherent characteristics such as proliferation, migration and tumor-localizing capacity (Paris et al., 2016; Paris et al., 2017; Labusca et al., 2018). In the next step, the migration capacity of nano-engineered hMSCs, as an important MSC property, toward cancerous C26 cells and normal NIH/3T3 cells was evaluated. Surprisingly, nano-engineered hMSCs had a higher ability to infiltrate C26 cells in comparison to naive hMSCs. Our results share a number of similarities with the study of Wang et al. (2018), which revealed that there was no significant difference between the numbers of migratory MSCs treated with low-concentration PTX-PLGA NPs and unloaded MSCs.
The ability of nano-engineered hMSCs to suppress cancer cells was evaluated in a colon carcinoma cell line (C26 cells). Nano-engineered hMSCs could effectively induce cell death in C26 cells, whereas non-engineered hMSCs did not affect C26 viability, indicating that MSCs as a cellular vehicle had no effect on inhibiting or promoting cancer cell growth. It can be expected that this cellular carrier could efficiently target tumor cells in animal models of cancer and increase tumor homing. This is consistent with previous studies demonstrating that nano-engineered hMSCs resulted in greater tumor inhibition in different types of cancers (Yao et al., 2017; Zhao et al., 2017; Wang et al., 2019).
Although long-term studies are needed for further evaluation of this system's efficacy, our current study showed that nano-engineered hMSCs had great ability to migrate toward cancer cells and they can serve as an efficient cellular carrier for targeted drug delivery to tumor cells. Future in vivo studies can investigate the efficacy of these carriers in animal tumor models.
In the present study, MSCs were isolated from human adipose tissue and, for the first time, loaded with GBA-containing PLGA NPs (PLGA/GBA NPs) to construct a cellular carrier to suppress cancer cells. The viability of hMSCs and their important ability to migrate toward cancer cells were found to be unaffected after PLGA/GBA loading. hMSCs carrying PLGA/GBA NPs (nano-engineered hMSCs) were shown to be efficient in killing C26 colon cancer cells in vitro in a dose-dependent manner.
Our study indicated that nano-engineered hMSCs could be considered a promising cellular carrier for targeted delivery of anti-cancer therapeutics. | 2022-10-01T05:10:42.639Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "13d05ca6c34be6d325bdd2484b8d9c5b3db2ebfa",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "13d05ca6c34be6d325bdd2484b8d9c5b3db2ebfa",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
4611161 | pes2o/s2orc | v3-fos-license | Anastomotic stoma coated with chitosan film as a betamethasone dipropionate carrier for peripheral nerve regeneration
Scar hyperplasia at the suture site is an important factor hindering the repair effect of peripheral nerve injury anastomosis. To address this issue, two repair methods are often used. Biological agents are used to isolate nerve sutures from the surrounding tissue to achieve a physical anti-adhesion effect. The other approach uses glucocorticosteroids, which can prevent scar growth by inhibiting inflammation. However, the overall effect on regeneration of the injured nerve is not satisfactory. In this regard, we envision that these two methods can be combined to achieve improved nerve repair. In this study, the right tibial nerve was transected 1 cm above the knee to establish a rat tibial nerve injury model. The incision was directly sutured after nerve transection. The anastomotic stoma was coated with 0.5 × 0.5 cm^2 chitosan sheets containing betamethasone dipropionate. At 12 weeks after injury, compared with the control and poly (D, L-lactic acid) groups, the chitosan-betamethasone dipropionate film had degraded slowly, with the shape of the membrane still intact. Further, scar hyperplasia and the degree of adhesion at the anastomotic stoma were obviously reduced, while the regenerated nerve fiber structure was complete and well ordered in model rats. Electrophysiological study showed enhanced compound muscle action potential. Our results confirm that chitosan-betamethasone dipropionate film can effectively prevent local scar hyperplasia after tibial nerve repair and promote nerve regeneration.
Introduction
Peripheral nerve lesions are common and severe injuries that affect 2.8% of trauma patients annually, and result in lifetime disability if unattended (Belkas et al., 2004). Nowadays, various methods are used to guide regenerating nerve fibers into the correct distal endoneurial tubes during surgical repair. The most-used strategies developed for nerve repair include end-to-end suturing of nerve stumps and bridging by autografts (Isaacs et al., 2008). However, a major problem for nerve repair is the formation of fibroblastic scars at the site of neuroanastomosis (Ngeow, 2010). Even with a well-repaired nerve, half of the regenerating axons may grow into scar tissue, which leads to local neuroma and impedes axonal regeneration to the target (Sedy, 2010). Consequently, regenerated nerve function is generally far from satisfactory.
Thus, production of fibroblastic scars during nerve anastomosis impedes the regeneration of repaired nerves.
Nowadays, two kinds of treatments are used. Biological agents that separate the site of nerve anastomosis from the surrounding tissue to achieve the effect of physical protection, namely sodium hyaluronate and poly (D, L-lactic acid) (PDLLA) (Ganguly et al., 2004;Takezawa et al., 2007), while another type of agent is glucocorticosteroid, which prevents scar growth by inhibiting inflammation (Grauer et al., 2001). Combining these strategies with biological or synthetic materials for repairing damaged peripheral nerves is of utmost interest. Among them, synthetic tubes seeded with Schwann cells to facilitate axonal regeneration are the most widely used (Ghaznavi et al., 2011;Joseph et al., 2011). Unfortunately, Schwann cells secrete major histocompatibility complex class I, which occupies the most gene-dense region of the mammalian genome and plays a key role in immune system recognition and transplantation. Consequently, there has been controversy concerning the use of allogeneic Schwann cells for nerve repair owing to their specific immunogenicity. Recently, synthetic tubes seeded with aligned neural stem cells were found to promote axonal regeneration of a nerve defect in a rat model (Hsu et al., 2009). However, biological agents are barely usable because of rapid degradation and absorption in the body, which means there is no long-term practical application of transplanted biological agents (Eillitz-Markus et al., 2012).
Betamethasone dipropionate is a glucocorticoid that is usually used externally or by injection. Betamethasone dipropionate plays an important role in Th lineage development (for instance, favoring generation of Th2 cells), downregulation of FasL expression, and inhibition of activation-induced T cell apoptosis (Paul et al., 2017). Glucocorticoids are also potent inducers of apoptosis, which can cause the death of CD4+CD8+ thymocytes at concentrations achieved during the stress response (Lovato et al., 2016; Queille-Roussel et al., 2016).
Many drug delivery systems derived from chitosan have been developed (Janes et al., 2001; Mitra et al., 2001; Mi et al., 2002; Abarrategi et al., 2008), from which drug can be released gradually to improve the treatment effect. For optimal drug activity, the quality and shape of the carrier are pivotal. In our study, a flexible flat chitosan sheet was produced by a simple technique, and we subsequently observed chitosan biodegradation and drug (betamethasone dipropionate) release from the chitosan sheet. This system enables us to investigate the feasibility of chitosan as a drug delivery system for repairing peripheral nerve lesions.
Animals
Seventy-two male Wistar rats aged 5-6 weeks and weighing 120 g were obtained from the Experimental Animal Center of the Chinese Academy of Sciences (SYXK (Hu) 2013-0062). After anesthesia, the surgical area (right leg of each rat) was shaved and disinfected. The tibial nerve was cut 1 cm above the knee, and the nerve stumps were sutured immediately under a microscope (Yohn et al., 2008). Sheets (0.5 cm × 0.5 cm) were coated onto the nerve anastomosis. Rats were placed in individual cages and allowed free access to food and water.
The study protocol was approved by the Animal Ethics Committee of Hangzhou Plastic Surgery Hospital of China (approval number: LY12H05005). The experimental procedure followed the United States National Institutes of Health Guide for the Care and Use of Laboratory Animal (NIH Publication No. 85-23, revised 1986).
All rats were randomly divided into three groups before treatment: control group (n = 24; treatment by direct suture after nerve transection); PDLLA group (n = 24; PDLLA was coated in nerve anastomosis); chitosan-betamethasone dipropionate film group (n = 24; chitosan sheets with betamethasone dipropionate were coated in nerve anastomosis).
Chitosan sheet formation and betamethasone dipropionate incorporation
Depolymerization was implemented by agitating chitosan solution with 15 mL of 31.36 mM KNO 2 for 2 hours at 35°C. Acetone was added to depolymerized chitosan to stop the reaction. A sterile 0.22 μm filter was used to filter 1% (w/v) chitosan solution (50 mM acetic acid), which was possible because of the water and ash content. Next, 200 μL aliquots were layered over 1 cm 2 surface dishes. The solvent was allowed to evaporate from open plates in a sterile laminar flow hood overnight at room temperature. Sheets were incubated with buffer phosphate (0.25 M, pH 7.0), and extensively washed with phosphate buffered saline (PBS). Betamethasone dipropionate 1 mL (1 g/mL) was added to the initial chitosan solution when necessary.
Swelling studies
After formation of sheets, dry sheets were weighed and immersed in PBS at 37°C under continuous agitation. Surface PBS was removed using dry tissue and the sheets were weighed again. Swelling percentage was calculated as follows: S% = (wet sample weight − dry sample weight) / dry sample weight × 100.
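As a worked example of the equation above (the weights are hypothetical):

```python
def swelling_percent(wet_weight_mg, dry_weight_mg):
    """S% = (wet sample weight - dry sample weight) / dry sample weight x 100."""
    return (wet_weight_mg - dry_weight_mg) / dry_weight_mg * 100.0

# A 10 mg dry sheet weighing 38 mg after immersion swelled by 280%.
print(swelling_percent(38.0, 10.0))  # 280.0
```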
Betamethasone dipropionate diffusion assay
Betamethasone dipropionate concentration was calculated by high-performance liquid chromatography (HPLC). An HPLC column with a fluorescence detector (F1050; Hitachi, Tokyo, Japan) was used for analysis. The stationary phase was a normal phase column (Mightysil Si60; Kanto Kagaku, Tokyo, Japan), and the mobile phase was chloroform-isopropanol-acetic acid-water-sodium acetate buffer (100:100:14:14:1, pH 4.5) at a flow rate of 1.0 mL/min. Fluorescent signals were surveyed at 470 nm excitation and 585 nm emission. Betamethasone dipropionate was extracted by chloroform/methanol (4:1) and then centrifuged at 15,000 r/min for 15 minutes. The phase-separated chloroform/ methanol layer was analyzed by HPLC. Pure betamethasone dipropionate was used as the standard. The content of betamethasone dipropionate absorbed by the chitosan sheets was 1 μg/mg dry chitosan. Fluorescence microscopy was used to confirm absorption of betamethasone dipropionate.
Sheet bioactivity assays
Sheets were treated as described above. L929 fibroblasts were purchased from Invitrogen Corp. (Carlsbad, CA, USA). Cells were cultured in RPMI-1640 medium containing 15% fetal bovine serum, L-glutamine, and penicillin-streptomycin at 37°C in an atmosphere of 5% CO2. The culture medium was replaced every 3 days. Next, cells were layered onto sheets at a density of 2 × 10^3 cells/well in 96-well tissue culture clusters (Corning Costar, New York, NY, USA). At specific time intervals, six samples from each sheet group were gently rinsed twice with sterile PBS solution to remove dead cells, and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assays were performed to quantify cell viability.
Sheet stability at different pH values
Dry sheets were saturated in different pH buffer solutions from pH 3 to pH 7.4 at 37°C. Samples were taken at selected time points. Chitosan content was determined by colorimetric assay (Cibacron Brilliant Red 3B-A, TX, USA) and shown as a percentage of total chitosan amount initially present in the sheet.
In vitro enzymatic degradation
In vitro degradation of chitosan films (1 cm^2, performed in 48-well plates) was carried out with 500 μg/mL lysozyme (hen egg white; Sigma) in 1 mL PBS (pH 7.4) at 37°C. The enzyme solution was replaced every other day, and at selected time points 1 mL of acetic acid was added to dissolve the remaining sheet. Subsequently, enzymatic degradation of chitosan sheets was performed in 1 mL of 0.2 M acetate buffer solution (pH 5.4) by the same procedure. An automated microviscometer (Anton Paar, Graz, Austria) was used to measure the dynamic viscosity of the solution.
Evaluation of betamethasone dipropionate retained in sheets after enzymatic hydrolysis
In vitro enzymatic degradation of chitosan sheets was performed as described above. The remaining film was dissolved in acetic acid at exposition days 1, 4, 7, 11, and 14. Betamethasone dipropionate was calculated by colorimetric assay as previously described (see Sheet stability at different pH values).
Immunohistochemistry
The animals were killed under anesthesia at 12 weeks after surgery. Tissue samples of the tibial nerve were taken, and the specimens were preserved in 2.5% glutaraldehyde and embedded in araldite. Serial 20-μm-thick longitudinal sections were taken through the suture site of the tibial nerve. Sections were rinsed twice with 0.01 M PBS, followed by immunohistochemical staining of nerve fibers using mouse anti-neurofilament (NF) monoclonal antibody (dilution 1:200; Abcam, Cambridge, MA, USA) to evaluate axonal regeneration at the suture site. Afterwards, sections were incubated at 4°C overnight and washed three times with cold PBS. Then, fluorescent secondary antibody (fluorescein isothiocyanate-conjugated goat anti-mouse IgG, dilution 1:100; Santa Cruz Biotechnology, Santa Cruz, CA, USA) was added for 2 hours. Sections were examined with a BX53 fluorescence microscope at 100× magnification.
Electrophysiological evaluation
At 8 and 12 weeks after surgery, electrophysiological recordings were performed using Medelec Synergy (Viasys Healthcare Inc., Conshohocken, PA, USA) in a quiet room at 22-23°C. All recordings were performed with subcutaneous needle electrodes. Nerve conduction tests of nerve fibers were performed, including motor nerve conduction velocity (MNCV) and compound muscle action potential (CMAP). For CMAP, the active electrode was inserted into the middle of the gastrocnemius, the reference electrode was placed on the muscle tendon in the silent region of the distal extremity, and a ground electrode was placed externally on the animal's thorax. The tibial nerve was stimulated at the tibial notch 1.5 cm proximal to the injured region by adjusting the distance between the cathode and anode to 1 cm. Supramaximal pulses (usually 0.05 ms in duration) were delivered. To reduce artifacts, stimulus intensity was held at the lowest level (1 mA) according to the muscle reaction. The filter was set at 1 Hz-5 kHz, sweep speed was 1 ms/div, and sensitivity was 0.5 mV/div. Latency was calculated from the onset of CMAP, with conduction velocity of the fastest fiber given. Baseline to peak amplitude was also measured; this reflects the number of fibers activated by nerve stimulation.

Figure 1 Swelling test in phosphate buffered saline at 37°C.
Dried and neutralized sheets were soaked in phosphate buffered solution at 37°C. Swelling rate was determined by gravimetric test. Data demonstrate percentage of swelling (mean ± SD, n = 24, non-linear regression fitting analysis). The experiment was performed in triplicate.

Figure 2 Drug release test in phosphate buffered saline at 37°C.
Films incorporating drug were immersed in phosphate buffered solution and aliquots were collected over the time course. Afterwards, released drug content was measured. Remaining drug content was measured by dissolution of the remaining film (mean ± SD, n = 24). The experiment was performed in triplicate.

Figure 4 Sheet stability at several pH values at 37°C.
Sheets were immersed at several pH values [pH 3, pH 4, pH 5, pH 6, and phosphate buffered saline (pH 7.4)] as shown, and the amount of chitosan was measured. The results are shown as a percentage of total chitosan in the film, and were fit to an exponential association model (mean ± SD, n = 24). The experiment was performed in triplicate. Control group: treatment by direct suture after nerve transection; PDLLA group: PDLLA was coated in nerve anastomosis; chitosan-betamethasone dipropionate film group: chitosan sheets with betamethasone dipropionate were coated in nerve anastomosis. PDLLA: Poly (D, L-lactic acid).

Figure 5 General examination at 12 weeks after surgery.
(A) Tibial nerves adhered to scars with restricted movement in the control group because of markedly proliferated scars. (B) Scar formation was not significant in the PDLLA group. (C) Chitosan-collagen betamethasone dipropionate film had completely degraded and scar proliferation surrounding the tibial nerve was barely seen in the chitosan-betamethasone dipropionate film group. Arrows: adhered tibial nerves. Control group: treatment by direct suture after nerve transection; PDLLA group: PDLLA was coated in nerve anastomosis; chitosan-betamethasone dipropionate film group: chitosan sheets with betamethasone dipropionate were coated in nerve anastomosis. PDLLA: Poly (D, L-lactic acid).

Figure 6 Immunohistochemical staining of nerve fibers (fluorescence microscope).
Immunohistochemical staining (mouse anti-neurofilament monoclonal antibody) at the suture site (longitudinal section) was performed at 12 weeks after surgery. (A) Control group: regenerative nerve fibers were twisted and had poor alignment (arrows). (B) PDLLA group: regenerative nerve fibers were increased and well-arranged. (C) Chitosan-betamethasone dipropionate film group: regenerative nerve fibers were straight, with more fibers than in the control and PDLLA groups. Scale bar: 1 µm. Control group: treatment by direct suture after nerve transection; PDLLA group: PDLLA was coated in nerve anastomosis; chitosan-betamethasone dipropionate film group: chitosan sheets with betamethasone dipropionate were coated in nerve anastomosis. PDLLA: Poly (D, L-lactic acid).
Statistical analysis
Each test was performed in triplicate. The results are shown as the mean ± SD. Differences among more than two groups were analyzed by analysis of variance followed by Bonferroni post hoc test with SPSS 18.0 software (IBM, Armonk, NY, USA). GraphPad Prism 6.0 software (GraphPad Company, San Diego, CA, USA) was used for non-linear regression fitting analyses. A value of P less than 0.05 was considered statistically significant.
Characteristics of chitosan sheets
We assumed that once the sheets became hydrated, betamethasone dipropionate would diffuse into the swelling medium. In this study, we used PBS as the hydrating medium. Swelling rate was calculated by weighing each sample before and after immersion at 37°C for 24 hours. After soaking in PBS, neutralized and dried chitosan sheets swelled immediately (within approximately one minute), and this swelling was maintained over time (Figure 1). Non-linear regression analysis showed that the rate constant was 2.073 ± 0.352 min⁻¹ (half-time of approximately 0.36 minutes). Furthermore, the mechanical integrity of the films was retained in optimum condition until the test was finished. Nonetheless, sheets that were not subjected to the neutralization process dissolved quickly, as expected (data not shown). Films reached their maximum swelling after a few minutes. Hence, the delivery system was ready for drug release by a diffusion mechanism.
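To make the non-linear regression step concrete, the following is a minimal sketch assuming a one-phase exponential association model S(t) = S_max(1 - e^(-kt)); the time points and swelling percentages are illustrative, not the study's data.

# Sketch of fitting a one-phase exponential association model to swelling data.
import numpy as np
from scipy.optimize import curve_fit

def exp_association(t, s_max, k):
    """One-phase exponential association: plateau s_max, rate constant k."""
    return s_max * (1.0 - np.exp(-k * t))

t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 60.0])             # minutes
s = np.array([0.0, 180.0, 260.0, 310.0, 330.0, 335.0, 336.0, 337.0])  # % swelling

(s_max, k), cov = curve_fit(exp_association, t, s, p0=(300.0, 1.0))
k_sd = np.sqrt(np.diag(cov))[1]
print(f"rate constant k = {k:.3f} ± {k_sd:.3f} 1/min")
print(f"half-time = ln(2)/k = {np.log(2) / k:.2f} min")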
Drug release assay
To investigate drug release, we used several different experimental methods. First, we measured the drug quantity released by diffusion as a function of contact time. We then dissolved the sheet and measured the drug remaining at the end of the experiment (7 days of contact exposure). The Bradford micro assay (Figure 2) showed that no drug was detected during the first two days, and release was detectable only after five consecutive days. The drug quantity released over the time course remained close to the detection limit. Even after 7 days, almost 85% of the initial drug remained in the sheet. Furthermore, the MTT assay showed that in vitro L929 cell viability did not differ significantly among the control, PDLLA, and chitosan-betamethasone dipropionate film groups (Figure 3). Dissolution of the carrier itself in the biological environment may be another route of drug delivery. We therefore examined the dissolution of chitosan sheets at pH values ranging from acidic to normal physiological pH (pH 3, 4, 5, 6, and 7.4). The results indicate that dissolution was faster at lower pH values, with the sheet totally dissolved at pH 3 after 10 hours, whereas it was unaffected at neutral pH (Figure 4).
Figure 7 Electrophysiological nerve indices after repair with chitosan film and betamethasone dipropionate.
Electrophysiological study at 8 and 12 weeks after surgery. CMAP in the chitosan-betamethasone dipropionate film group was markedly higher than in the PDLLA and control groups, while CMAP was higher in the PDLLA group than in the control group. Asterisks show statistically significant differences. *P < 0.05 (mean ± SD, n = 24, analysis of variance followed by Bonferroni post hoc test). I: control group: treatment by direct suture after nerve transection; II: PDLLA group: PDLLA was coated on the nerve anastomosis; III: chitosan-betamethasone dipropionate film group: chitosan sheets with betamethasone dipropionate were coated on the nerve anastomosis. MNCV: motor nerve conduction velocity; CMAP: compound muscle action potential; PDLLA: poly (D,L-lactic acid).
Chitosan-collagen betamethasone dipropionate film promotes nerve repair
Next, we examined the effect of chitosan-collagen betamethasone dipropionate films in the repair of nerve injury. As described in the Materials and Methods, the tibial nerve was cut at a site 1 cm above the knee. Nerve stumps were sutured under microscopy and wounds were treated with either chitosan-collagen betamethasone dipropionate film or PDLLA. We observed drug retention, scar formation, and nerve adhesion to scars at 4, 8, and 12 weeks after surgery. At 4 weeks after surgery, the shape of the chitosan-collagen betamethasone dipropionate film was normal, while PDLLA was significantly degraded (data not shown). At 8 weeks after surgery, the chitosan-collagen betamethasone dipropionate film was visibly degraded (data not shown). At 12 weeks after surgery, treatment with the chitosan-collagen betamethasone dipropionate film showed an improved effect on nerve repair compared with the control group (Figure 5).
We found there was much less adhesion to the surrounding tissue in the chitosan-collagen betamethasone dipropionate film group. In all experimental groups, the incision healed well without secondary infection.
Histological assay for nerve regeneration
We performed immunocytochemical staining of nerve fibers for morphological observation. We found more nerve fibers with straighter growth in chitosan-collagen betamethasone dipropionate film-treated rats (Figure 6).
Electrophysiological study of nerve function
We determined whether nerve fiber function was improved in chitosan-collagen betamethasone dipropionate film-treated rats compared with the control group. Electrophysiological study showed that chitosan-collagen betamethasone dipropionate film-treated nerve fibers had improved function compared with the control group. At 8 weeks after surgery, CMAP was detected in all groups in response to stimulation of the distal, proximal, or conduit portion of nerve fibers. MNCV and CMAP amplitude in the control group were significantly lower than in the PDLLA and chitosan-betamethasone dipropionate film groups (P < 0.05). However, there was no significant difference in MNCV or CMAP amplitude between the PDLLA and chitosan-betamethasone dipropionate film groups (P > 0.05). At 12 weeks after surgery, there was still no significant difference in MNCV between the PDLLA and chitosan-betamethasone dipropionate film groups (P > 0.05), but there was a significant difference in CMAP amplitude (P < 0.05; Figure 7).
Discussion
Axons can regenerate successfully to restore functional connections in the peripheral nervous system (Benowitz et al., 2011; He et al., 2016; Hernandez-Morato et al., 2016). Formation of fibroblastic scars is a major problem in nerve repair and inhibits regeneration of repaired nerves. Although scar formation is important for normal wound healing, scarring has side effects in many clinical situations (O'Kane et al., 1997). Scarring around a repaired peripheral nerve can significantly impede axonal sprouting and regeneration, which may lead to an unfavorable prognosis (Adams et al., 2016; Geuna et al., 2016; Levi et al., 2016).
Approximately half of regenerating axons may grow into scar tissue, even in a well-repaired nerve, which leads to local neuroma and impedes axonal regeneration to the target. Scarring is much worse in the event of any tension, which makes it almost impossible for axons to cross the nerve anastomosis and reach the distal part of the nerve. The reason why fibroblastic scar tissue is so uncompromising is unknown, especially as axons grow reasonably well on fibroblasts (Ozay et al., 2007; Albayrak et al., 2010; Park et al., 2011; Gocmen et al., 2012). Recently, studies have shown that endoneurial and perineurial fibroblasts (the main scar-forming cells) produce the proteoglycan neural/glial antigen 2 (NG2), which is a considerable inhibitor of axonal regeneration (Hossain-Ibrahim et al., 2007). The amount of glycosaminoglycan chain attached to NG2 increases dramatically in response to injury, which may enhance its inhibitory effect. Further, there is a great increase in NG2 within scar domains blocking regeneration in injured human nerves (Fiorentino et al., 1991; Scherer et al., 1993; Kiefer et al., 1995; O'Keefe et al., 1999; Rezajooi et al., 2004; Bhatheja et al., 2006).
Chitosan is a natural aminopolysaccharide obtained by deacetylation of chitin that has been used as a drug-delivery vehicle because of its favorable biological properties, including bioactivity, biocompatibility, positive charge, low immunogenicity, and biodegradability (Kumar et al., 2004; Cherukuri et al., 2017; Cho et al., 2017). Various carriers derived from chitosan have been developed as drug-delivery systems. Chitosan- and anionic alginate-coated poly (d,l-lactide-co-glycolide) nanoparticles are suitable for delivery of bioactive resveratrol (Sanna et al., 2012). Encapsulation of resveratrol into optimized polymeric nanoparticles provides improved drug loading, effective controlled release, and protection against light-exposure degradation, thereby opening new perspectives for delivery of bioactive phytochemicals for (nano) chemoprevention/chemotherapy (Sanna et al., 2012; Liu et al., 2017). Jayaraman et al. (2012) described the synthesis and use of an efficient nano-chitosan peptide carrier for retinal drug delivery, which was an excellent carrier with the potential to treat age-related macular degeneration. However, its clinical applications remain to be further investigated (Yu et al., 2012).
In our study, we developed a new method to delay scar formation. We used chitosan-collagen betamethasone dipropionate film to ensure the drug remained longer in wounds, allowing nerve fibers to sprout and regenerate effectively and repair the injured neural network. To investigate sheet characteristics, several different experimental methods were used. The sheets reached their maximum swelling within two minutes, which suggests they enable drug release through a diffusion mechanism. Almost 85% of the initial drug was retained in the sheet, even after 7 days. Furthermore, the MTT test was performed to quantify cell viability and showed that the sheets did not influence cell proliferation in vitro. In physiological environments, wound acidification is part of the wound-healing process, which begins with hemostasis and inflammation. Accordingly, we examined chitosan sheet dissolution at different pH values, which showed a faster dissolution process at lower pH values, while the sheets were unaffected when soaked at neutral pH. These findings show that the sheets have favorable biological properties; most importantly, they performed significantly better than the control group during the in vivo experimental procedure. Additionally, chitosan-collagen is harmless, bioabsorbable, and suitable for surgery.
Our data demonstrate that chitosan-betamethasone dipropionate film is a promising material to improve functional nerve fiber regeneration in surgery. | 2018-04-03T00:28:22.588Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "a6b1e9100cf24df663f9c9bd548c2ad223e58b34",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1673-5374.226401",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a6b1e9100cf24df663f9c9bd548c2ad223e58b34",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10797925 | pes2o/s2orc | v3-fos-license | Cheating-Resilient Incentive Scheme for Mobile Crowdsensing Systems
Mobile Crowdsensing is a promising paradigm for ubiquitous sensing, which explores the tremendous amount of data collected by mobile smart devices with prominent spatial-temporal coverage. As a fundamental property of Mobile Crowdsensing Systems, temporarily recruited mobile users can provide agile, fine-grained, and economical sensing labor; however, their self-interest cannot guarantee the quality of the sensing data, even when there is a fair return. Therefore, a mechanism is required for the system server to recruit well-behaving users for credible sensing, and to stimulate and reward more contributive users based on sensing truth discovery to further increase credible reporting. In this paper, we develop a novel Cheating-Resilient Incentive (CRI) scheme for Mobile Crowdsensing Systems, which achieves credibility-driven user recruitment and payback maximization for honest users with quality data. Via theoretical analysis, we demonstrate the correctness of our design. The performance of our scheme is evaluated based on extensive real-world trace-driven simulations. Our evaluation results show that our scheme is effective in terms of both guaranteeing sensing accuracy and resisting potential cheating behaviors, in practical scenarios as well as intentionally harsher ones.
I. INTRODUCTION
The proliferation of mobile smart devices has promoted the development of Mobile Crowdsensing Systems (MCSs), a promising paradigm for agile, fine-grained, and economical sensing with prominent spatial-temporal coverage [1]. Exploring people-centric data collected by smart devices with enriched sensors (e.g. Global Positioning System, gyroscope and microphone), a growing number of MCS prototypes have been developed to support applications, including urban sensing [2], environmental monitoring [3] and mobile social networking [4].
Observing that the data source of MCSs is a set of personal mobile devices recruited temporarily, the self-interested nature of mobile users needs to be taken into account for MCS implementations. From the perspective of mobile users, considering the potential costs (e.g. physical labor, device battery life, and network bandwidth usage), participation in an MCS sensing task is unlikely unless there is a considerable payback, under the rational person hypothesis. From the perspective of the MCS server, the credibility of reported observations from temporarily recruited users is not guaranteed (i.e. users may cheat in sensing tasks just for the payback without reporting quality data, which is referred to as Cheating Behavior in this paper), even when it pays fairly for data acquisition. Intuitively, we can see that: (i) the MCS server needs to recruit credible users for sensing tasks, and (ii) honest responses deserve substantial reward while dishonest reporting requires reprimand. A mechanism satisfying these two requirements simultaneously is necessary for MCS implementations.
User incentive schemes [5] aim at encouraging self-interested users to participate in system tasks by rewarding monetary or tradable paybacks. Generally speaking, existing schemes in participatory systems model the user incentive process as an optimization problem for either the system server [6], [7] or the users [8], [9] by designing mechanisms based on auction or game theory. Nonetheless, research on incentive mechanisms that considers active cheating behaviors from self-interested users is relatively limited. Schemes in [10] and [11] aim to guarantee the 'bidder's truthfulness' in designed auctions. Also, incentive schemes were proposed to evaluate the quality of user reports [12], [13]. However, these schemes cannot be directly deployed in highly dynamic and opportunistic MCSs. To stimulate the service time of MCS participants, a Stackelberg game-based incentive mechanism is proposed in [14], which maximizes the utility of the MCS platform and proves that a best strategy for all self-interested participants can be centrally determined. However, since the purpose of that work is to stimulate user participation and no sensing-data-quality-related factor is considered, it cannot solve the dishonest user reporting issue. An incentive mechanism that encourages quality data reporting is necessary to guarantee the usability of MCSs.
In this paper, we develop a novel Cheating-Resilient Incentive (CRI) scheme for MCSs, which guarantees the accuracy of crowdsensing tasks while encouraging mobile users to provide quality data, without cheating, for maximum paybacks. Our contributions are summarized as follows:
• Based on the participation-driven incentivization in [14], we develop a reputation-driven method for the MCS server to recruit the most credible users autonomously according to their historical behaviors. Meanwhile, recruited users can obtain maximum paybacks only when they contribute no less than expected. We demonstrate the correctness of our design with theoretical analysis.
• We introduce the truth discovery technique [15] into the user incentive issue to evaluate the actual contribution of recruited users in MCS tasks. The adaptive truth discovery guarantees the accuracy of crowdsensing while providing a baseline for user contribution evaluation.
• Through extensive trace-driven simulations, we evaluate the performance of CRI. The simulation results validate the effectiveness of CRI with respect to both quality-driven user stimulation and cheating behavior resistance.

Fig. 1: System Architecture

The remainder of the paper is organized as follows: In Section II, we present the MCS model and the description of the user incentive problem. In Section III, we present the CRI scheme in detail. In Section IV, we present the evaluation results of our designed scheme via extensive trace-driven simulations. Finally, we conclude this paper in Section V.
A. MCS Architecture
As shown in Figure 1, a general MCS consists of a cloud server s and a set of registered mobile users P = {1, 2, . . . , M}, where M ≥ 2. Any user i ∈ P can communicate with s via either cellular or WiFi access points.
For a specific sensing target, s can announce a task τ to all users in P. Receiving the announcement, any user i ∈ P who is interested in τ can reply with its reputation r_i as an application, which reflects its behavior in historical tasks. According to all applications received, s can determine the total reward R and the set of applicants E = {1, 2, . . . , N} ⊆ P to be the final employees of τ. All employees in E then conduct the sensing obligations required by τ and report their observations O = {o_1, o_2, . . . , o_N} to s. Based on all reports received, s can discover τ's sensing truth o_τ and evaluate the contributions C = {c_1, c_2, . . . , c_N} of all employees separately, which determine their paybacks G = {g_1, g_2, . . . , g_N} for τ and the corresponding reputation adjustments.
B. User Incentive Problem
The purpose of incentive mechanisms in MCSs is to stimulate mobile user activity through paybacks for participating in sensing tasks. As the decision maker, s prefers to maximize paybacks of the most contributive users as stimulation, while minimizing the effect of potential cheating behaviors (e.g. intentionally reporting random data just for the payback), to achieve accurate sensing.
To guarantee the quality of sensing reports, s needs to consider two issues: (i) the selection of employees for a given task should be based on the applicants' reputations, and (ii) the reputation adjustment and payback to an employee should be based on their contribution to the current task. Therefore, our user incentive scheme should solve the following problems:
1) How to define and manage the user reputation?
2) How to recruit employees based on their reputations?
3) How to evaluate an employee's contribution?
4) How to quantify and maximize an employee's payback according to its contribution?
III. THE CHEATING-RESILIENT INCENTIVE SCHEME
In this section, we develop the Cheating-Resilient Incentive (CRI) scheme to address the four problems outlined in Subsection II-B. Before the construction of CRI, we first provide the formal definitions of both user reputation and employee payback. Then, we present a mechanism for reputation-driven employee recruitment. After that, we address the issue of how to evaluate an employee's contribution and how to quantify and maximize an employee's payback according to its contribution.
A. Definitions
In this paper, we formally define user reputation and employee payback as follows.
User Reputation: Intuitively, we treat the reputation as a user's credibility for offering quality reports. In fact, once recruited, the user is responsible for providing an actual contribution to the sensing task in proportion to its reputation.
For any mobile user i ∈ P, the reputation of i, denoted as r_i (0 ≤ r_i ≤ 1), will be adjusted by s according to Equation (1) whenever i participates in a sensing task τ, where r′_i denotes the reputation of i before it participates in τ, c̄_i denotes i's expected contribution to τ (estimated by s considering r′_i, discussed later), c_i denotes i's actual contribution to τ (evaluated by s, discussed later), and 0 ≤ α ≤ 1 determines the sensitivity of i's reputation adjustment. After a user's registration, s issues any i ∈ P a value r_0 as its initial reputation.
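The display form of Equation (1) did not survive extraction, so the following sketch only illustrates one plausible update consistent with the quantities named above (r′_i, c̄_i, c_i, α, and the [0, 1] range); the exact functional form is an assumption for illustration, not the paper's formula.

# Hedged sketch of a reputation update consistent with the stated ingredients.
# The functional form is assumed; Equation (1) is not reproduced in this text.
def update_reputation(r_prev: float, c_bar: float, c: float, alpha: float = 0.5) -> float:
    if c_bar <= 0:
        return r_prev
    # Reward contributing at least as expected, penalize falling short.
    ratio = min(c / c_bar, 1.0)
    r_new = (1.0 - alpha) * r_prev + alpha * ratio
    return min(max(r_new, 0.0), 1.0)  # keep reputation within [0, 1]

print(update_reputation(0.5, c_bar=1.0, c=1.0))   # honest: reputation rises
print(update_reputation(0.5, c_bar=1.0, c=0.2))   # cheating: reputation drops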
Employee Payback: Intuitively, it is natural to determine an employee's payback according to its contribution. Also, considering that the reputation of an employee only denotes the probability of its good behavior, we take the potential quality risk into account for the payback determination.
Inspired by the sensing-time-driven model in [14], for an employee i ∈ E of task τ, we define its reputation-driven payback g_i as

g_i = (c_i / Σ_{j∈E} c_j) · R − ((1 − r_i)/r_i) · c_i,   (2)

where R > 0 denotes the total reward of τ. For a fixed R, g_i is subject to both the ratio of i's contribution, (c_i / Σ_{j∈E} c_j) · R, and the potential quality risk of recruiting i, ((1 − r_i)/r_i) · c_i.
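For concreteness, the following is a minimal transcription of the reconstructed Equation (2) into code; the functional form is read directly from the two terms named above, and the variable names are ours.

# Equation (2), as reconstructed: contribution-share reward minus the
# reputation-weighted quality-risk cost.
def payback(i: int, contributions: list[float], reputations: list[float], R: float) -> float:
    total = sum(contributions)
    if total <= 0:
        return 0.0
    c_i, r_i = contributions[i], reputations[i]
    share_reward = (c_i / total) * R
    quality_risk = ((1.0 - r_i) / r_i) * c_i
    return share_reward - quality_risk

# Example: three employees splitting a total reward R = 100.
cs = [3.0, 2.0, 1.0]
rs = [0.9, 0.8, 0.6]
for i in range(3):
    print(f"employee {i}: g = {payback(i, cs, rs, 100.0):.2f}")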
Based on the definitions above, we develop CRI, which consists of three components: (i) Employee Recruitment, (ii) Contribution Evaluation, and (iii) Payback Determination.
B. Employee Recruitment
We now address the issue of how to recruit employees based on their reputations. After the announcement of task τ, having received all applications R = {r_1, r_2, . . . , r_M} (see Subsection II-A), s prefers to recruit the employees with the top reputations. Nonetheless, according to Equation (2), it is possible that an applicant could obtain no payback if the total reward budget is low. Therefore, s needs to determine the final employees considering their expected paybacks Ḡ = {ḡ_1, ḡ_2, . . . , ḡ_M}, which are determined by their expected contributions C̄ = {c̄_1, c̄_2, . . . , c̄_M}.
For effective stimulation, it is necessary for s to maximize the expected payback that a well-behaving employee can receive. According to Equation (2), g_i is second-order continuously differentiable in c̄_i, and its second derivative is g̈_i(c̄_i) = −2R Σ_{j∈E\{i}} c̄_j / (Σ_{j∈E} c̄_j)³. When c̄_i ≥ 0, we have g̈_i(c̄_i) < 0, so g_i(c̄_i) is a concave function. Therefore, g_i(c̄_i) has a unique maximum value when |P| ≥ 2. We call this value ḡ_i i's expected payback, which can be calculated whenever ġ_i(c̄_i) = 0, c̄_i > 0 has a solution.
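The concavity argument can be checked symbolically. The following is a minimal sketch, assuming Equation (2) in the single-variable form g(c) = cR/(c + S) − kc, where S stands for the other employees' total expected contribution and k = (1 − r_i)/r_i; variable names are ours.

# Symbolic check of the concavity claim for the reconstructed Equation (2).
import sympy as sp

c = sp.Symbol("c")
S, R, k = sp.symbols("S R k", positive=True)
g = (c / (c + S)) * R - k * c

print(sp.simplify(sp.diff(g, c, 2)))         # -> -2*R*S/(S + c)**3, negative for c >= 0
print(sp.solve(sp.Eq(sp.diff(g, c), 0), c))  # roots -S ± sqrt(R*S/k); the positive
                                             # root exists whenever R > k*S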
Therefore, from the perspective of s, any i ∈ P that is finally recruited should satisfy the restriction that ġ_i(c̄_i) = 0 admits a solution c̄_i > 0 with ḡ_i > 0. To allow s to autonomously recruit as many credible users as possible with a fixed total reward, we develop Algorithm 1, based on the NE computation algorithm in the STD game [14], to determine the final employees based solely on their reputations. All recruited employees will receive the maximum payback only if they contribute as expected.
According to [14], we have Propositions 1 and 2 for Algorithm 1 as follows.
Proposition 1. Any i ∈ P that is not recruited by Algorithm 1 gets the maximum payback by not participating in τ.
Proposition 2. Any i ∈ E that is recruited by Algorithm 1 can obtain the maximum payback only if it contributes as expected in τ.
Because of the page limitation, please refer to Theorems 1 and 2 in [14] for the specific proofs.
At this point, s can recruit the most credible employees E and compute their expected contributions C̄ based only on all received applications R, and C̄ is treated as one of the metrics for the payback determination.
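Algorithm 1 itself is not reproduced in this excerpt. The following is a hedged sketch of a reputation-driven recruitment step in the style of the STD game's NE computation [14], assuming a per-unit cost κ_i = (1 − r_i)/r_i so that more reputable applicants are cheaper to recruit; the average-cost stopping rule and the closed-form expected contributions follow the usual Stackelberg NE pattern and are illustrative, not the paper's exact algorithm.

# Sketch of NE-style recruitment; assumes all reputations are in (0, 1] and
# at least two applicants exist (an NE needs two or more users).
def recruit(reputations: list[float], R: float) -> dict[int, float]:
    kappas = sorted(((1.0 - r) / r, idx) for idx, r in enumerate(reputations))
    selected = kappas[:2]                      # start with the two cheapest
    for kappa, idx in kappas[2:]:
        sigma = sum(k for k, _ in selected) + kappa
        n = len(selected) + 1
        # Keep adding applicants while the marginal cost stays below the
        # average-cost threshold, which keeps every expected payback positive.
        if kappa < sigma / (n - 1):
            selected.append((kappa, idx))
        else:
            break
    n = len(selected)
    sigma = sum(k for k, _ in selected)
    # Expected contribution of each recruited applicant at the NE of [14].
    return {idx: ((n - 1) * R / sigma) * (1 - (n - 1) * kappa / sigma)
            for kappa, idx in selected}

print(recruit([0.9, 0.85, 0.8, 0.4], R=100.0))  # the low-reputation user is excluded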
C. Contribution Evaluation
In the following, we address the issue of how to evaluate an employee's contribution. After employee recruitment, s announces a detailed task description of τ to all i ∈ E. Because there is no ground truth in our MCS scenario, s needs to discover the sensing truth o_τ, based on O, as the baseline to evaluate employees' actual contributions. Considering the potential conflicts in reported observations and the differences in employee reputations, we develop Algorithm 2, based on the general truth discovery framework in [15], to allow s to compute o_τ and evaluate the actual employee contributions C at the same time.
According to Algorithm 2, each actual contribution c_i ∈ C depends on both the distance between o_i and o_τ and i's reputation r_i. We treat observations from employees with higher reputations as more credible, and an employee's contribution will be higher if its observation is closer to o_τ. C is treated as another metric for the payback determination.
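Algorithm 2 is likewise not reproduced here. The following is a minimal sketch of an iterative truth-discovery loop in the spirit of the framework in [15] for a single scalar task; seeding the initial weights with reputations and the exact weight formula are our assumptions, and the paper's algorithm may differ in detail.

# Sketch of CRH-style iterative truth discovery for one scalar observation set.
import math

def discover_truth(observations, reputations, eps=0.1, max_iter=100):
    weights = list(reputations)                 # seed weights with reputations
    truth = sum(w * o for w, o in zip(weights, observations)) / sum(weights)
    for _ in range(max_iter):
        # Weight update: sources closer to the current truth estimate receive
        # larger weights (logarithm of the relative squared distance).
        dists = [(o - truth) ** 2 + 1e-9 for o in observations]
        total = sum(dists)
        weights = [-math.log(d / total) for d in dists]
        new_truth = sum(w * o for w, o in zip(weights, observations)) / sum(weights)
        if abs(new_truth - truth) < eps:        # convergence threshold, cf. Sec. IV
            return new_truth, weights
        truth = new_truth
    return truth, weights

truth, w = discover_truth([21.5, 21.7, 21.4, 30.0], [0.9, 0.8, 0.85, 0.3])
print(f"discovered truth = {truth:.2f}")        # outlier report is down-weighted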
D. Payback Determination
We now answer the question of how to quantify and maximize an employee's payback according to its contribution. Based on the expected contributions C̄ and the actual contributions C, s can determine the final paybacks, guaranteeing that each c_i is within the range [0, 1].
According to Equation (2), s sets each employee's final payback g_i based on its actual contribution c_i. Also, according to Equation (1), s updates all r_i ∈ R_E accordingly. As demonstrated, CRI guarantees that (i) s can recruit a proper number of the most credible applicants for sensing tasks, and (ii) recruited users can only obtain the maximum paybacks when they contribute no less than expected. Cheating behaviors will reduce their paybacks and their opportunities to be recruited in future tasks.
IV. EVALUATION
To validate the performance of CRI in real-world MCSs, we conducted extensive trace-driven simulations based on OMNeT++ 4.6, using real-world outdoor temperature data collected by taxis in Rome (hereinafter referred to as Rometrace) [16]. In the following, we first present the simulation settings, and then show the evaluation results.
A. Simulation Settings
According to Rometrace, we constructed an MCS with a cloud server and 366 registered users. All users possessed outdoor temperature data opportunistically collected within 24 hours. The server spontaneously announced temperature sensing tasks to the users. After receiving an announcement, a user who possessed data collected within ±60 seconds autonomously applied for the task, and then uploaded the corresponding report if it was recruited. The server provided paybacks and updated employee reputations based on CRI during the simulation.
For the parameter settings, we set the initial reputation r_0 = 0.5 and α = 0.5 in Equation (1) for reasonable reputation bootstrapping and management. In addition, we set ε = 0.1 in Algorithm 2 as the truth discovery convergence threshold. Again, according to Rometrace, each round of simulation lasted 86,400 simulation seconds.
We collected the following four metrics to evaluate the impact of cheating behaviors on the MCS performance:
• Discovered Truth (DT) refers to the sensing truth discovered in a task, whose cumulative distribution reflects the sensing accuracy. Ideally, CRI should be able to prevent cheating behaviors from disrupting DT;
• Reputation (REP) refers to the user reputation, whose cumulative distribution reflects the user's behavior in historical tasks. Ideally, CRI should be able to downgrade a cheater's REP in proportion to its cheating intensity;
• Payback (PB) refers to what a user receives for accomplishing sensing tasks, which reflects the motivation of the user to participate in future tasks. Ideally, CRI should be able to reduce the PB that a user can get if it cheats;
• Task Count (TC) refers to the number of sensing tasks accomplished by a user, which reflects the popularity of the user. Ideally, CRI should be able to limit the probability of a cheater participating in MCS tasks.
For comparison, we ran a round of simulation without any cheating behavior as the baseline (i.e. the no-cheating scenario). Then, we analyzed the impact of cheating behaviors introduced by users with different properties. In the following subsections, we depict the simulation results using cumulative distribution figures.
B. Impact of General Cheating Intensity
In this set of simulations, to study the impact of general cheating behaviors with different intensities, we set all users in the MCS to introduce cheating behaviors with different intensities. (The settings of these cheating intensities are reasonable considering the well-accepted fact that an MCS is a relatively good community with a limited ratio of malicious behaviors, e.g., 4% in [17] or 10% in [18].) The simulation results are illustrated in Figure 2. According to Figure 2(a), compared with the baseline scenario, the cumulative distribution of DT remains almost the same in all cheating scenarios. We can see that only a slight disturbance is introduced by general cheating behaviors with an intensity of up to 20%. Such an impact is nearly negligible considering practical temperature sensing requirements. CRI manages to effectively restrict the impact of general cheating behaviors on DT in both realistic and even harsher scenarios. According to Figure 2(b), (c), and (d), when the cheating intensity increases, a user's reputation, payback, and task count are correspondingly downgraded by at least 1.64%, 3.44%, and 2.20%, respectively. CRI manages to reduce a cheater's probability of being recruited in future tasks by autonomously reducing its reputation and payback, which inherently restricts users' cheating intentions.
C. Impact of Cheaters with Different Properties
In real-world MCSs, cheating behaviors of more trustworthy or active users may pose deeper impacts on the MCS's performance. In this set of simulations, depending on the simulation result of the baseline scenario, we separately set a user that (i) had the highest reputation (referred to as the TopR cheater), (ii) received the most paybacks (referred to as the TopP cheater), and (iii) accomplished the most tasks (referred to as the TopC cheater) to introduce cheating behaviors either consistently (i.e. cheat with a 100% probability) or intermittently (i.e. cheat with a 50% probability). The simulation results are illustrated in Figures 3, 4, and 5.
TopR Cheater: According to Figure 3, neither the consistent nor the intermittent cheating of the TopR cheater caused obvious impact on DT (introduced disturbances of 0.18% and 0.46% on the average DT, respectively). Nonetheless, in comparison with the baseline scenario, REP of the TopR cheater was downgraded as long as there was cheating behavior (1.04% and 92.71% lower, respectively). Correspondingly, both PB (1.19% and 99.98% less, respectively) and TC (8.33% and 33.33% less, respectively) of the TopR cheater decreased.
TopP Cheater: According to Figure 4, neither consistent nor intermittent cheating behaviors of the TopP cheater caused an obvious impact on DT (introduced disturbances of 0.73% and 0.82% on the average DT, respectively). Meanwhile, REP of the TopP cheater (10.34% and 14.94% lower, respectively) was significantly downgraded. Similarly, its PB (18.18% and 25.93% less, respectively) and TC (32% and 26% less, respectively) declined dramatically because of cheating.
TopC Cheater: According to Figure 5, DT was not obviously affected by either consistent or intermittent cheating behaviors of the TopC cheater (introduced disturbances of 0.91% and 0.37% on the average DT, respectively). In turn, its REP (58.51% and 42.55% lower, respectively) was severely downgraded in the cheating scenarios. Also, its PB (53.05% and 50.35% less, respectively) and TC (39.22% and 41.18% less, respectively) decreased significantly.
According to the results above, it is well demonstrated that CRI manages to encourage users to report honestly (with their best efforts) in sensing tasks for higher payback, reputation, and recruiting opportunities in practical MCSs.
V. CONCLUSION
In this paper, we developed CRI for MCSs to guarantee crowdsensing accuracy by encouraging mobile users to provide quality data, without cheating, for the maximum paybacks. Specifically, CRI enables the MCS server to autonomously recruit as many credible users as possible as task employees, and employees will obtain the maximum payback only if they contribute honestly, as their reputations indicate. Via theoretical analysis, we demonstrated the feasibility and correctness of our design. To evaluate the performance of CRI in practical MCSs, we conducted extensive simulations based on real-world crowdsensing data. The results show that CRI manages to guarantee the sensing accuracy under realistic cheating intensities (up to 20% of total reports). Meanwhile, cheating behaviors from users with selected advantages (i.e. higher reputations, more received paybacks, and more accomplished tasks) can be effectively resisted as well. Our future work is to develop a privacy-preserving CRI, which encourages honest user behaviors without jeopardizing sensitive user information, including identities, locations, and living patterns. | 2017-01-08T09:00:31.000Z | 2017-01-08T00:00:00.000 | {
"year": 2017,
"sha1": "796f4b3198c8e6e11a73bbb8536866d8c8ab9f42",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.01928",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "796f4b3198c8e6e11a73bbb8536866d8c8ab9f42",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
270919444 | pes2o/s2orc | v3-fos-license | Discovery of interspecific alloparental care of silver‐eared mesia nestlings by a mountain bulbul
Abstract Although brood parasitism has been well documented among bird species, interspecific alloparenting, which is parenting behavior of adult individuals of one species toward the progeny of another species, is increasingly being reported. However, compared with the many reports of interspecific alloparenting behavior in North America and Europe, this phenomenon is less well known in China, with only two prior cases of interspecific alloparenting behavior in birds having been recorded. On June 23, 2022, we observed an instance of interspecific alloparental care provided by a mountain bulbul (Ixos mcclellandii) towards silver‐eared mesia (Leiothrix argentauris) nestlings in Caihu Village, Jingdong County, Yunnan Province, southwestern China. We recorded 19.5 h of footage during the period in which the mountain bulbul provided care for the nestlings with the aim of documenting detailed observations of interspecific alloparenting to contribute to our overall understanding of this behavior. The alloparenting behavior of the mountain bulbul lasted for at least 5 days. During this period, both silver‐eared mesia parents fed their nestlings 157 times and removed their nestlings' fecal sacs 5 times, while the mountain bulbul fed the nestlings 30 times and removed the nestlings' fecal sacs 4 times. In addition, the male silver‐eared mesia parent chased the mountain bulbul away during nestling feeding. As there was no life history information for the mountain bulbul at that time, we were unable to directly determine why it exhibited interspecific alloparental care. Regardless of the reason for the mountain bulbul's behavior, these findings provide valuable information for future studies on the reproductive ecology of these two bird species.
Apart from interspecific brood parasitism, only 186 examples of interspecific alloparental care behavior have been reported, involving 107 species from 41 families in 10 orders, which represents less than 2% of the total number of birds worldwide (Harmáčková, 2021).
Furthermore, prior to the current study, only two cases of interspecific alloparental care had been reported in China (Jiang et al., 2016; Luo et al., 2018). Detailed records of interspecific alloparental care can contribute to our overall understanding of the evolution of this behavior, including whether it is limited to a few bird species or prevalent across avian species and whether it is restricted to a particular region or exhibited worldwide. Furthermore, as it is unclear why individual birds engage in parental care at nests that are not their own, investigating the taxonomic distribution and habitat associations of alloparenting may elucidate the underlying reasons for this behavior.
Therefore, in this study, we aimed to provide detailed observations of a mountain bulbul (Ixos mcclellandii) that engaged in interspecific alloparental care of silver-eared mesia (Leiothrix argentauris) nestlings in southwestern China.
| METHODS
The study site was located in Caihu Village (24°17′37″ N, 100°35′22″ E), Jingdong County, Yunnan Province, southwestern China. The village has an elevation of 1767 m, the annual mean temperature is 15°C, and the annual precipitation is 1500 mm. The study site is located in the deep mountains of a sparsely populated area and is characterized mainly by natural secondary and artificial forests.
On June 15, 2022, we discovered a silver-eared mesia nest in Caihu Village. The silver-eared mesia nestlings were hatching, with two hatched nestlings and two eggs in the nest. To investigate the feeding behavior of silver-eared mesias, we used a GoPro HERO 9 action camera (GoPro, San Mateo, CA, USA) combined with a 10,000 mAh external power supply (TP-D094, Pisen, Guangdong Pinsheng Electronics Co., Ltd., China) to record the silver-eared mesia nest.
The recording equipment was tied to a tree 1.5 m from the nest. The silver-eared mesia nestlings fledged on June 27, 2022. Therefore, the recording period was from June 15 to 27, 2022. The daily recording duration was not fixed, starting as early as 05:00 and ending as late as 20:00 (GMT + 8). The shortest daily recording duration was 1 h and the longest was 8 h. As male and female silver-eared mesias differ in coloration (Zhao, 2001), we were able to differentiate between the parents and record their feeding events separately. Through the video recordings, we observed a mountain bulbul providing interspecific alloparental care to the silver-eared mesia nestlings. Mountain bulbuls typically build their nests on the lateral branches of tall trees or on shrubs and small trees in the understory, with nest heights ranging from 1.2 to 12 m above ground level (Zhao, 2001). Bulbul nests are cup-shaped and primarily made of grass stems, leaves, and roots (Zhao, 2001). Despite a systematic search for mountain bulbul nests, we were unable to locate any active nests in the study area. Due to the similar appearance of male and female mountain bulbuls, we were unable to identify the sex of the individual that provided interspecific alloparental care to the mesia nestlings.
We imported the videos recorded using the GoPro into a computer (ThinkPad X1 Carbon, Lenovo, China) and used video playback to document the behaviors of the silver-eared mesias and the mountain bulbul at the nest. We defined feeding behavior as an adult bird carrying food and delivering it to the mouths of the chicks, whereas nest-cleaning behavior was defined as an adult bird removing fecal sacs left by the chicks after feeding.
| RESULTS
The recordings showed that the mountain bulbul first provided interspecific alloparental care to the silver-eared mesia nestlings on June 23. At this time, the four nestlings were 8 days old. As the recordings did not cover the entire day, it was unclear whether the mountain bulbul provided care to the nestlings before this date.
However, from June 23 to 27, the recordings showed that the mountain bulbul assisted in feeding the nestlings every day. Therefore, the mountain bulbul engaged in feeding behavior for at least 5 days, until the nestlings fledged on June 27. We recorded 48.3 h during the 13-day nestling period of the silver-eared mesia. During this time, the female and male silver-eared mesias fed their nestlings 133 and 106 times, respectively (Video S1). We recorded 19.5 h during the 5 days that the mountain bulbul provided interspecific alloparental care. During this period, the female silver-eared mesia fed the nestlings 88 times, the male silver-eared mesia fed the nestlings 69 times, and the mountain bulbul fed the nestlings 30 times (Video S2). The food fed to the nestlings by the silver-eared mesia parents mainly consisted of Lepidoptera, Odonata, and Orthoptera adults and larvae (Figure 1).
In addition to feeding the silver-eared mesia nestlings insects, the mountain bulbul fed them red and black berries (Figure 2). The male and female silver-eared mesia parents exhibited nest-sanitation behavior and removed nestling fecal sacs after feeding. The mountain bulbul also provided interspecific alloparental care to the silver-eared mesia nestlings by removing their fecal sacs. The mountain bulbul removed fecal sacs from the nest four times during the observation period (Figure 2).
In addition, the male silver-eared mesia was observed chasing the mountain bulbul away from the nest (Video S3). After feeding the nestlings, the mountain bulbul typically remained in the area around the nest, and when it observed that the male parent was near the nest, it immediately flew away. In one recorded interaction, the mountain bulbul did not fly away after feeding the nestlings and was attacked by the male silver-eared mesia. We did not observe the female silver-eared mesia rejecting the presence of the mountain bulbul.
| DISCUSSION
In birds, interspecific parental care is often observed in the form of brood parasitism, wherein a bird lays its eggs in another bird's nest to induce it to raise the chicks as its own. However, feeding the offspring of other birds can also occur in other situations (Jiang et al., 2016; Konter, 2012; Levey & del Coro Arizmendi, 2021; Luo et al., 2018; Mann, 2017). For example, there are many records of interspecific alloparental care in songbirds (Fiss et al., 2016; Halley & Heckscher, 2013; Harmáčková, 2021; Krištín, 2009). In this study, we observed a mountain bulbul feeding silver-eared mesia nestlings for a period of 5 days beginning on June 23, 2022. Attempts to find a mountain bulbul nest near the silver-eared mesia nest were unsuccessful. Therefore, owing to limited knowledge of the mountain bulbul's life history, the underlying factors driving interspecific alloparental care behaviors in mountain bulbuls remain uncertain. However, video evidence showed that the mountain bulbul fed the silver-eared mesia nestlings less frequently than their parents did. Additionally, there were differences between the male and female silver-eared mesia parents in their tolerance of the presence of the mountain bulbul near their nest. The male silver-eared mesia parent was observed attacking the mountain bulbul when it remained near the nest after feeding the nestlings, whereas the female parent did not exhibit this behavior.
Regarding interspecific alloparental care, Shy (1982) listed eight possible reasons for this behavior: (1) the nest contained the progeny of two species (i.e., a mixed nest), which may occur when different species lay eggs in the same nest due to limitations in nesting sites (Barrientos et al., 2015); (2) the bird was physiologically in a parenting state but did not have a nest (e.g., the nest was predated), which resulted in the transfer of parental care behaviors to a nearby nest (Haucke, 2015; Riedman, 1982; Skutch, 1999); (3) birds tend to be attracted by neighboring nests, and the nests of individuals who provided interspecific alloparental care were in close proximity to the nests of other birds (Jiang et al., 2016); (4) the individuals were stimulated by the loud begging calls of the nestlings of another species; (5) the progeny of one species were orphaned and adopted by individuals of another species; (6) the mate of a male bird was incubating eggs, so the male fed the nestlings of other birds; (7) the individuals were either unmated or did not have offspring of their own; and (8) other unspecified reasons. The possibility that the mountain bulbul was attracted to a neighboring silver-eared mesia nest was not supported by the evidence, as attempts to find a mountain bulbul nest within 100 m of the silver-eared mesia nest were unsuccessful. At the study site, the nesting preferences of mountain bulbuls and silver-eared mesias might overlap. It is also possible that the nest of the mountain bulbul was predated upon or destroyed, although we found no evidence to support this. However, as these are common reasons for interspecific alloparental care behavior in other bird species (Haucke, 2015; Heber, 2013), one or more of these factors may explain the mountain bulbul's provision of interspecific alloparental care to the silver-eared mesia nestlings. In addition, it is possible that the mountain bulbul was stimulated by the vocalizations of the silver-eared mesia nestlings, compelling it to engage in alloparenting behavior (Shy, 1982). Notably, these reasons are not mutually exclusive, and more than one factor may have influenced the bulbul's behavior. Regardless of why the bulbul exhibited interspecific alloparental care, it was observed responding to the calls of the silver-eared mesia chicks. In the recordings, the silver-eared mesia nestlings exhibited begging behavior towards birds near their nest, regardless of whether these birds were their parents or not. Consequently, the mountain bulbul was observed feeding the silver-eared mesia nestlings 30 times over five consecutive days.
Figure 1 Silver-eared mesia (Leiothrix argentauris) nestlings fed by both parents and an alloparent: (a) female and (b) male silver-eared mesia feeding moths to their nestlings; (c) first documented visit by the alloparent, a mountain bulbul (Ixos mcclellandii), to the nest; and (d) a mountain bulbul feeding the silver-eared mesia nestlings.
Figure 2 (a) Silver-eared mesias and (b) the alloparent mountain bulbul removing fecal sacs after feeding the silver-eared mesia nestlings.
This supports the likelihood that the nest of the mountain bulbul was predated upon, leading the bulbul to transfer its parenting behavior to nestlings in a nearby nest. A previous study suggested that the desire to feed progeny may be a reason for interspecific alloparenting in mountain bulbuls (Harmáčková, 2021). Similar to the reasons listed by Shy (1982), this may occur in cases where the individual providing interspecific alloparental care is a mateless bird or a mated male bird whose mate is incubating eggs. It is possible that the mountain bulbul may not have had its own progeny to feed and therefore fed the progeny of the silver-eared mesias. The mountain bulbul observed in this study showed parental care towards chicks of another species, suggesting that these birds cannot differentiate between their own chicks and those of different species, which is similar to the behavior of other birds (Peek et al., 1972; Turtumøygard & Slagsvold, 2010; Tyller et al., 2018). This failure to recognize differences in progeny may be one of the reasons for the evolution of interspecific brood parasitism in birds. For example, mountain bulbul nests have been observed to be parasitized by the common cuckoo (Cuculus canorus) (Lowther, 2017).
Although we were unable to definitively identify the reasons for the provision of parental care by the mountain bulbul towards the silver-eared mesia nestlings, we hypothesize that overlap in the breeding time and nest environment of mountain bulbuls and silver-eared mesias, breeding failure in the mountain bulbul, and stimulation by vocalizations from the silver-eared mesia nestlings may have been the reasons why the mountain bulbul provided parental care to the silver-eared mesia nestlings. Regardless of the behavioral mechanisms underlying interspecific alloparenting behavior in mountain bulbuls, these findings provide valuable information for future studies on the reproductive ecology of these two bird species. | 2024-07-04T05:07:20.005Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "a80941f31ebd891acad67c9b9552f5c71a1172f1",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a80941f31ebd891acad67c9b9552f5c71a1172f1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26331466 | pes2o/s2orc | v3-fos-license | Ultrastructural Analysis of in Vitro Adherence and Production of Acid Proteases by Clinical Isolates of Candida parapsilosis Sensu Stricto Following Growth in the Presence of Keratinous Substrates from Human Source
Candida parapsilosis is an increasingly important human pathogen. However, little is known about its potential to cause disease. The aims of the present study were to analyse the production of acid proteinases by clinical isolates of C. parapsilosis in the presence of different keratinous substrates from human sources (stratum corneum, nail and hair) and to verify the capability of yeast cells to adhere and grow as biofilm on these substrates. By scanning electron microscopy, it was observed that all C. parapsilosis sensu stricto isolates adhered to the keratinous substrates. For the isolate recovered from onychomycosis, the cell population attached to stratum corneum and hair keratin consisted mainly of blastoconidia. Differently, on nail keratin, pseudohyphae production was observed. Overall, there was a loose association between yeast cells and keratinous substrates. However, on stratum corneum, flocculent extracellular material was seen enveloping cells from the onychomycosis isolate, forming a biofilm-like structure. The isolates recovered from onychomycosis and cutaneous lesion produced higher amounts of acid proteinases in medium supplemented with nail keratin and stratum corneum keratin, respectively, than in salt medium (absence of keratin). Furthermore, no differences were observed in the amount of acid proteinases produced by the isolate recovered from tracheal secretion in the media tested (absence and presence of keratin substrates). The information derived from this study will further our understanding of acid proteinase production by C. parapsilosis isolates and provide an insight into pathogenic mechanisms in C. parapsilosis, particularly in isolates recovered from superficial mycoses.
Introduction
Candida parapsilosis is an opportunistic yeast pathogen that colonizes human skin and can spread nosocomially through hand carriage [1,2]. Over the past decade, the incidence of C. parapsilosis has dramatically increased. The yeast can cause candidiasis that can vary from relatively mild skin mycoses to life-threatening systemic or disseminated disease (reviewed in van Asbeck et al. [3]).
Concerning superficial mycoses, C. parapsilosis has gained increasing recognition worldwide as the most common etiological agent causing Candida onychomycosis (reviewed in Trofa et al. [4]). In Brazil, C. parapsilosis is the first or second most common cause of onychomycosis lesions [5-8]. For C. parapsilosis, the colonized normal skin presumably serves as a reservoir of infection for the nails. Recently, we showed the capability of C. parapsilosis isolates exhibiting distinct phenotypes to grow as biofilm on human nail surfaces [9].
Several virulence factors of C. parapsilosis have been proposed, including adhesion, biofilm formation and secretion of hydrolases such as secreted aspartic proteinases (Saps) (reviewed in Trofa et al. [4]). We previously reported the production of proteinases and haemolytic factor by isolates of C. parapsilosis obtained from distinct clinical sources [10]. More recently, we have demonstrated that C. parapsilosis sensu stricto secretes a hemolytic factor into culture medium [11].
However, in contrast to the species Candida albicans, the virulence traits of C. parapsilosis have not been extensively studied. Few studies have been undertaken to evaluate the adhesion ability of clinical strains of C. parapsilosis [9,12-14]. Furthermore, the relationship between C. parapsilosis virulence and proteinase phenotype is still unclear.
In this study, we investigated for the first time the in vitro adherence pattern and production of acid proteases by clinical isolates of C. parapsilosis in the presence of keratinous substrates from human source, e.g., stratum corneum, nail and hair.
Candida Isolates and Identification
Isolates of C. parapsilosis sensu stricto included in this study were recovered from fingernail onychomycosis (isolate 150.06), cutaneous candidiasis (isolate 220.07) and tracheal secretion (isolate 205.06) [15]. The identity of the isolates was determined by the PCR technique as described by França et al. [10], using C. parapsilosis (formerly C. parapsilosis group I) specific primers for the URA3 gene (orotidine-5'-phosphate decarboxylase) [16], which allows C. parapsilosis sensu stricto to be distinguished from the cryptospecies belonging to the C. parapsilosis complex.
Preparation of Keratin Substrates
For the substrate stratum corneum, fragments of human sole from a healthy volunteer were prepared as described previously [17] with modifications. The fragments were soaked in ethanol for 96 h, followed by washes with sterilized distilled water until a clean solution was obtained. The fragments were dried at 50°C, ground to a powder in liquid nitrogen and dried again at 50°C.
The substrates nail and hair were prepared as described previously [18] with modifications. Human hair and nails from a healthy volunteer were cut to obtain small fragments ranging from 0.5 to 1 cm in size. The fragments were defatted by soaking for 4 d in chloroform-methanol 1:1 (v/v). The solvent was changed once a day. The fragments were then thoroughly washed with sterilized distilled water and dried for 3 d at 50°C. The nail fragments were ground to a powder in the presence of liquid nitrogen and dried again at 50°C. The prepared substrates were autoclaved for 5 min at 115°C and then added to previously sterilized basal minimal medium.
Proteinase Assay
Proteolytic activity was determined as previously described [20] with modifications, using haemoglobin (Sigma; St. Louis, MO) as substrate. Each assay included 50 µl of culture supernatant and 0.2 ml of 20 mM citrate buffer, pH 4.0, containing haemoglobin (0.5 mg/ml). After incubation at 37°C for 2 h, the reaction was stopped with 5% trichloroacetic acid (TCA) on ice, followed by incubation at 4°C for 1 h. The mixture was centrifuged at 4000 g for 10 min at 4°C. After this, three aliquots (150 μl each) of the reaction mixture were transferred to wells of a microtiter plate containing 100 μl of a Coomassie solution (0.025% Coomassie brilliant blue G-250, 11.75% ethanol, and 21.25% phosphoric acid). After 10 min to allow dye binding, the plate was read on an Asys HiTech UVM 340 microplate reader at an absorbance of 595 nm. Protease activity was calculated based on the absorbance difference between samples and controls. For the control samples, supernatants were immediately treated with TCA. One unit (U) of proteolytic activity was defined as the amount of enzyme that caused an increase of 0.001 absorbance units under standard assay conditions. The proteolytic activity is expressed as U/ml.
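Translating this unit definition into arithmetic, the following is a minimal sketch; normalizing by the 50 µl supernatant volume to obtain U/ml is our reading of the protocol, not an explicit statement in it.

# Worked example of the unit definition: 1 U = 0.001 increase in A595.
SAMPLE_VOLUME_ML = 0.05       # 50 µl of culture supernatant per assay (assumed basis)

def proteinase_activity(a595_sample: float, a595_control: float) -> float:
    delta_a = a595_sample - a595_control      # dye-binding difference vs. control
    units = delta_a / 0.001                   # 1 U per 0.001 absorbance units
    return units / SAMPLE_VOLUME_ML           # express as U/ml

# E.g., sample A595 = 0.542 vs TCA-stopped control A595 = 0.520:
print(f"{proteinase_activity(0.542, 0.520):.0f} U/ml")   # -> 440 U/ml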
Scanning Electron Microscopy
To verify the adhesion pattern of yeast cells on human keratin substrates, samples were fixed in 2.5% glutaraldehyde (Electron Microscopy Sciences) in 0.

Statistical Analysis
All the experiments were repeated three times and the assay was performed in duplicate. The Tukey test was used to determine statistical significance. P < 0.05 was considered statistically significant.

In Vitro Adherence Pattern of C. parapsilosis Cells to Human Keratinous Substrates
Yeast pathogenicity arises through complex interactions, and adherence is essential for members of the genus Candida to develop their pathogenic potential since it triggers the process that leads to colonization and allows their persistence in the host. For instance, more pathogenic Candida isolates showed higher adherence capacity on human oral and epithelial cells [21]. According to these authors, epithelial cell variability played a critical role in the adherence phenomenon [21].
In our study, we analysed the in vitro pattern of adherence of C. parapsilosis sensu stricto cells to human keratinised substrates, i.e., on soft keratin (cutaneous stratum corneum, the outermost layer of skin) and hard keratin (nail and hair). By scanning electron microscopy, it was observed that all C. parapsilosis isolates adhered to the keratinous substrates (Figure 1). On hair fragments, the adherent cells were seen only within the follicle cortex, and the number of cells adhered to this substrate varied among isolates (Figure 1(c)).
SEM analysis revealed that the onychomycosis isolate presented different morphological patterns according to the substrate with which it was in contact. For instance, cells adhered to stratum corneum and hair keratin consisted mainly of cells in the budding-yeast phase of growth (blastoconidia) (Figures 1(a1) and (c1)). Differently, on nail keratin, pseudohyphae production was observed (Figure 1(b1)), a pattern that could indicate that this situation favours cellular morphologies with a capacity for tissue invasion. These data extend our previous observation that different profiles of biofilm formation by C. parapsilosis occurred as a function of the keratinous substrate [9].
For the other isolates, the cellular population consisted of blastoconidia and pseudohyphae on all substrates analysed, with the exception of isolate 205.06 (tracheal secretion) on hair keratin (Figure 1(c3)). Intraspecific differences in adherence to polystyrene have been described among clinical isolates of C. parapsilosis obtained from distinct body sites [12]. More recently, reconstituted human epithelium (RHE) has been used to study in vitro colonization by the C. parapsilosis complex [13,14]. According to these authors, the extent of surface colonization of RHE by C. parapsilosis was strain dependent.
Overall, there was a loose association between yeast cells and the keratinous substrates. However, on stratum corneum, flocculent extracellular material was seen enveloping cells of the onychomycosis isolate, forming a biofilm-like structure (Figure 1(a1)). This feature was not observed on the other two sources of human keratin (nail and hair).
It has been shown that adhesion [22] and the ability to grow as biofilms [23] on abiotic surfaces are especially important for outbreaks of C. parapsilosis infections (reviewed in Trofa et al. [4]). This is the first report of ultrastructural features related to the adhesion of C. parapsilosis isolates associated with skin and nail infections to distinct keratinised substrates of human origin.
Acid Proteinase Production
In this study, we evaluated for the first time the growth profile and the production of acid proteinases by C. parapsilosis using human keratin as the sole source of nitrogen. The isolates tested presented a similar growth trend in keratin-supplemented media. However, fungal growth was more profuse in stratum corneum-supplemented medium than in hair- and nail-supplemented media (cell densities reached 10⁸ cells/ml and 10⁷ cells/ml, respectively), probably due to differences in keratin structure and in the degree of cross-linking by disulfide and hydrogen bonds.
The results obtained (Figure 2) showed that the C. parapsilosis sensu stricto isolates produced proteinases in all tested media. The isolate recovered from onychomycosis produced a higher amount of acid proteinases (P < 0.05) in medium supplemented with nail keratin (440 U/ml) than in salt medium (absence of keratin; 120 U/ml). No differences (P > 0.05) were observed in proteinase production in the presence of the other two sources of keratin (stratum corneum and hair). For isolate 220.07 (cutaneous lesion), the production of acid proteinases was higher in medium supplemented with stratum corneum keratin (440 U/ml) than in salt medium (120 U/ml) (Figure 2). These data suggest that the source of keratin is correlated with the induction of acid proteinases in an isolate-dependent manner.
In contrast, no differences were observed in the amount of acid proteinases produced by the isolate recovered from tracheal secretion in the media tested (absence and presence of keratin substrates) (Figure 2). Furthermore, when proteinase production by the Candida isolates was compared after growth in the same source of keratin, e.g., in nail keratin medium, the isolate recovered from onychomycosis exhibited higher proteinase activity (P < 0.05) than the isolate obtained from tracheal secretion, suggesting that the potential of the C. parapsilosis nail isolate to cause onychomycosis may be associated with acid proteinase production.
It has been reported that the expression of genes encoding aspartic proteinases (Saps) varied among different clinical isolates of the C. parapsilosis complex when grown in contact with human oral epithelium [14]. According to these authors, there is a trend relating Sap production to the site from which the isolates were recovered.
In C. albicans, the most discussed hydrolytic enzymes are the secreted aspartic proteinases (Saps), which are among the well-known virulence factors of this species (reviewed in Schaller et al. [24]). According to Monod and Borg-von [25], Saps play a role in fungal adherence and invasion of the skin by C. albicans. The occurrence of Saps has previously been demonstrated in C. parapsilosis isolates obtained from distinct clinical samples [10,26-28], and it has been suggested that Saps are associated with superficial, but not with systemic, invasion caused by C. parapsilosis [26]. These authors observed the induction of Saps from C. parapsilosis cultivated in media containing bovine serum albumin (BSA) as a nitrogen source. From the clinical point of view, it is questionable whether these enzymes could perform the required function of digesting keratinised tissue during parasitism. Recently, we showed that the production of Saps in BSA-inducing medium by C. parapsilosis isolates obtained from nail and skin was less pronounced compared to isolates obtained from blood and tracheal secretion [10]. Although it is established that C. parapsilosis is an opportunistic pathogen related to the skin surface, and it is also emerging as an important cause of onychomycosis, as far as we know the present study is the first to describe the production of acid proteinases by isolates of C. parapsilosis recovered from superficial mycoses in the presence of keratinous substrates obtained from human sources. Nevertheless, the properties of the individual proteins that presumably account for the virulence of C. parapsilosis have not yet been elucidated.
Conclusion
In conclusion, ultrastructural investigations of the interface between C. parapsilosis and keratinised substrates of human origin reveal important features which may help to clarify the pathogenesis of superficial candidiasis. Our findings indicate the need to investigate a possible involvement of acid proteinases in onychomycosis and cutaneous lesions due to this species.
Figure 2. Measurement of secreted acid proteinase activity on haemoglobin in clinical isolates of Candida parapsilosis obtained from onychomycosis (150.06), cutaneous candidiasis (220.07) and tracheal secretion (205.06). After growth in basal minimal medium (MM) or MM supplemented with stratum corneum (MM + SC), hair (MM + H) or nail (MM + N) for 10 d at 37˚C, the cultures were harvested and the spent culture media were tested for their ability to degrade soluble haemoglobin. Proteolytic activity was determined as described in Material and Methods and is reported in arbitrary units (U/ml). Standard errors of the means for three measurements are presented as bars. * P < 0.05 for MM + N vs MM (isolate 150.06) and MM + SC vs MM (isolate 220.07); P < 0.05 for isolate 150.06 vs 205.06 (MM + N medium).
Yeast pathogenicity arises through complex interactions between the organism's virulence characteristics and the host's response. Compared with C. albicans, little is specifically known regarding virulence factors in C. parapsilosis sensu stricto. | 2017-10-21T04:07:31.679Z | 2013-12-13T00:00:00.000 | {
"year": 2013,
"sha1": "abe4fd034a28e7099746cdd7d04cd00eec02716c",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=40728",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "abe4fd034a28e7099746cdd7d04cd00eec02716c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
67805826 | pes2o/s2orc | v3-fos-license | Indebtedness and Comfort: The Undercurrents of Mizuko Kuyō in Contemporary Japan
It is difficult to recall exactly when weekly magazines and T.V. programs began to take up and sensationalize the theme of a curse associated with mizuko 水子, that is, aborted or stillborn children. It is manifestly clear that the pain and sorrow of losing a child is most directly realized by the would-be mother. Undeniably there are many people who, due to various circumstances, could not avoid losing their child through an abortion. To prey on people's suffering and weakness by advertising an association of a "curse" with these mizuko, while being fully aware of such circumstances, strikes one as a pernicious business. Nevertheless there are a large number of people who respond to these solicitations. Why are these memorials or offerings for mizuko (mizuko kuyō 水子供養) so popular in Japan today, and what is it about them which appeals to contemporary Japanese? There are some who believe that the large number of abortions carried out in Japan today, and the increasing demand for mizuko kuyō, are due to the deluge of sexual information and the shameless profusion of the so-called sex industry, and that abortion is most common among the younger population. However, as we shall see in more detail later, the available data does not support this theory. The roots of this problem are too deep to be explained simply by the sexual habits of the younger generation. Instead this trend is closely connected to the religious views of the Japanese people, especially their concept of the spirits of the dead (reikonkan 霊魂観), and in this sense mizuko kuyō is an important topic for those interested in Japanese religion. In this article we will examine mizuko kuyō and the related subject of memorials for pets (petto kuyō) from the perspective of religious studies. Since the subject encompasses a number of sensitive issues, we will first take
a step back and examine the background of this situation.
The Japanese Concept of the Spirits of the Dead
As we have already mentioned, mizuko kuyō is closely connected to the Japanese concept of the spirits of the dead. While we cannot fully explicate this topic here, it is the accepted opinion of most scholars that the animistic nature of Japanese religion is an important factor in this concept. We do not intend to discuss here whether animism is the belief which attributes a soul or spirit even to inorganic entities, as defined by the 19th century anthropologist Tylor (1958, Chapter 10), or the belief which attributes some more general "power," including some soul or spirit, even to inorganic entities. Here we are referring to animism in the latter, broader sense of attributing an ambiguous "power" to certain objects.
Traditional Eastern religions such as Hinduism in India or Mahāyāna Buddhism, unlike the monotheism of Christianity or Islam, have an emphatic "animistic" coloring, with strong polytheistic or pantheistic tendencies. The particularly polytheistic Japanese myths are well known, but in this article we are interested more in the relation of mizuko kuyō to Japanese Buddhism. On this point Nakamura Hajime says: For the Japanese of old, even the grasses and trees had a spirit (seishin 精神), and it was generally believed that these inanimate objects could become enlightened and thus be saved. In other words, the idea that even "non-sentient" (hijō 非情) objects could attain Buddhahood, based on the Tendai concept of the ultimate reality of all existences (shohō jissō 諸法実相), was very strong in Japan.
. . . The idea of attributing a spirit to even grasses and trees can be found in Indian Buddhism. . . . However, according to many Indian philosophies, living beings attain liberation through knowledge (vidya), and the idea that grasses and trees could attain enlightenment was not developed (Nakamura 1961, p. 18).
This idea was traditionally expressed through such phrases as "the grasses, trees, and land all without exception attain Buddhahood" (sōmoku kokudo shikkai jōbutsu 草木国土悉皆成仏) and "the mountains and rivers and grasses and trees all have the buddha nature" (sansen sōmoku shitsu'u busshō 山川草木悉有仏性). Even non-sentient entities such as grasses and trees or mountains and rivers can attain Buddhahood; how much more so can sentient beings. This is the ultimate expression of the Buddhist belief that "all sentient beings have the buddha nature." Of course this "buddha nature" is not, Buddhologically speaking, equivalent to the spirit which is pacified through mizuko kuyō, because at least this spirit is the spirit of the dead. However, it is very doubtful that this academic distinction was regarded as important by the general populace.
The fact that the framework for allowing this sort of development was provided by Buddhism does not mean that Buddhism generally handled the memorials for the spirits of the dead as in the present from an early historical period. It is well known that from the time before the introduction of Buddhism the Japanese felt an intense aversion toward corpses or anything associated with death, and that there were many taboos associated with death. Death was a central item in the list of defilements. Neither Shinto nor other elements of folk religion would deal with the bodies of the dead, whether they were those of human beings or animals, and even today anything associated with death is taboo to a Shinto shrine. No one in mourning can participate in a shrine ceremony.
In the case of contemporary Japan, one can get the impression that Buddhism has acquired exclusive rights for handling matters associated with the dead. Some may think that this was always the case for Buddhism in Japan, but this opinion is mistaken. It is now believed that the common people in Japan around the 7th century did not bury their dead in graves but rather discarded corpses by the side of the road (Shiiode 1972, p. 14). During the mid-Heian period, a story is told of Fujiwara no Tadahira (880-949), the head of the northern branch (Hokke 北家) of the Fujiwara family, who once visited his father Mototsune's (836-891) grave and mentioned that the graves of his grandfather-in-law Yoshifusa (804-872) and his ancestor Uchimaro were also nearby, but that he himself was not sure exactly where they were. At the time Fujiwara Tadahira was at the peak of his powers, one who held such posts as dajō daijin 太政大臣 and kanpaku 関白. Even such a man as this was not sure of the location of the family graves. It is clear that even members of high society like him were not particularly zealous in honoring their ancestors (Takatori 1969, p. 29).
It was not until the late Kamakura and Muromachi periods (13th-14th centuries) that Buddhism began to play a role among the common people in providing funeral and memorial services. According to Takeda, most of the ordinary temples in Japan were either founded or restored after A.D. 1501. Even those which claim to have been "restored" have no clear records before this time, and it is safe to say that there were no permanent resident priests in these temples before the time of their "restoration" (Takeda 1971, Chapters 1-4). If it is truly the case that Buddhist priests did not settle down in village temples around Japan until the 16th century, it must have been from this time that Japanese in general began to perform funeral services and to diligently observe memorial rites for their ancestors. It was on this basis that the Edo shogunate found it easy to establish the family temple system (terauke seido 寺請制度). The relationship between Buddhism and the general populace, supported by the central pillar of funeral services and ancestral rites, has thus continued to this day, despite anti-Buddhist government policies such as the forceful separation of Shinto and Buddhist elements (shinbutsu bunri 神仏分離) after the Meiji Restoration (late 19th century).
The Meiji constitution, based on the patrilineal system, has passed into history, and there was talk for a while after the Pacific War that under the new democratic constitution such ancestral rites would disappear, taking with them the Buddhism which depends on such activity to survive. However, this prediction has not materialized, and even today there is no sign that the Japanese concern for their ancestral spirits has diminished.
Spirits and Their Memorials
Buddhism in Japan today is so closely affiliated with ancestral rites that it is often called "funerary Buddhism" (sōshiki bukkyō 葬式仏教). That does not mean that all spirits of the dead are treated in the same way. Two good examples are the spirits of the mizuko and the spirits of dead children. Mizuko refers to children who die shortly after birth and fetuses who are stillborn (including both natural miscarriages and "artificial" abortions). "Children" refers to those of seven years of age or younger.
The spirits of the mizuko and dead children were, in traditional Japanese society, not treated the same as the spirits of dead adults. Chiba and Ōtsu have written extensively about the differences in funerary rites for these different subjects. It was not only the funerary rites which were different. In general, graves were not built for mizuko or children, and there were no memorial services held for them in particular. In some parts of the country, however, there were separate graveyards for children called kobaka.
One can also find such customs as the placing of sardines or some other fish in the mouth or in the casket of the mizuko or child before burial. According to Chiba and Ōtsu, however, the purpose of this practice was to prevent the mizuko or child from "attaining Buddhahood" after death and to allow them to be reborn in this world (1983, pp. 20-24, pp. 137-142).
In any case, it is undeniable that throughout Japan the spirits of dead children and mizuko were handled differently than the spirits of adults. One example of this point is the practice of having the village nenbutsu association (nenbutsu kō 念仏講) perform a funeral service for a dead child rather than calling a Buddhist priest from the temple. It is also not unusual to place the "ancestral" tablets of dead children and mizuko under rather than on the shelves of the shōryōdana 精霊棚 during o-bon お盆. The offerings to these spirits are also placed under rather than on this ancestral shelf.
We have mentioned above how in Japanese Buddhism not only sentient beings but also non-sentient beings possess the buddha nature. The custom of offering memorials for inanimate objects such as dolls (ningyō kuyō 人形供養) or needles (hari kuyō 針供養), or for victims of one's profession such as eels (unagi kuyō 鰻供養) or whales (kujira kuyō 鯨供養), reveals the Japanese belief that such beings, whether animal or inanimate, have some sort of "spirit" (reikon 霊魂) or "soul" (tamashii 魂). Chart I is an attempt to categorize "spirits" on the basis of such memorial rites (or, more specifically, as the objects of memorial rites performed under the aegis of Japanese Buddhism). The uenrei (spirits with relations), who have descendants remaining in this world to perform their ancestral rites, are the normal, most common type of human spirit. Having lived a full life, this spirit has descendants who after one's death will perform the proper rites. It will thus become an ancestral spirit who protects its descendants. As Yanagita Kunio has shown in his work Senzo no hanashi 先祖の話 (Tales of the ancestors, 1973, pp. 1-152), this is the "normal" ancestral spirit of the Japanese.
The muenrei 無縁霊 (spirits with no relations) are the opposite of the uenrei. They have no direct descendants to perform rites for them. It was believed that as a result these spirits could become vindictive (onryō 怨霊) and bring about misfortune for those still living in this world. However, as we mentioned above, the muenrei of those who died as adults and those of children and mizuko were traditionally handled differently.
The spirits of animals were traditionally handled through special memorial ceremonies for animals. In such cases, the animals being memorialized were invariably those which had performed some useful service for human beings. Examples would include the memorials for cows and horses by farmers or horse traders; memorials for fish, porpoises, or whales by fishermen or whalers; or mounds (senbikizuka) built by professional hunters in honor of their game. Each of these involved certain ceremonies to be performed by a Buddhist priest, and often included the establishment of a memorial tower or mound.
Companies and research facilities use a large number of guinea pigs and other small animals for scientific experiments. Many companies perform a regular memorial ceremony, usually once a year, in honor of these animals. There are many towers built as memorials to animals in the famous graveyard on Mt. Kōya, the headquarters of the Shingon school.
In contrast to the longer tradition of memorializing all these animals, memorial services for pets is a recent phenomenon. These pets do not perform a useful service to their owners in the traditional sense. They are not work animals. In the past, dogs performed various services, such as that of a watch-dog, and cats were kept to catch mice, and thus to a certain degree were "economic animals." Modern pets, however, play a different role. The recent use of the English loan word "petto" for pets signifies the change in status for these animals from a traditional, economically useful role, to that of a "humanized" role as a member of the family. In fact, "petto" has become a Japanese word and the traditional word for pet in Japanese is already obsolete.
Memorial services for fish are commonly performed by fishermen. A fishing village almost certainly has a stone memorial tower or monument dedicated to the fish which have been captured and killed. The more traditional Buddhist ceremony of releasing captured birds or fish (hōjō-e 放生会) is closely related to this practice.
The memorial tower for white ants on Mt. Kōya is the most famous example of memorials for insects or other bugs. This tower was built by a major company involved in the extermination of white ants. For most people these white ants are harmful vermin, but for exterminators it is possible to say that they are a source of business and thus indirectly beneficial entities. Memorial services for needles and dolls have been performed at places such as Sensō-ji 浅草寺 since the Edo period. This type of memorial, such as the one for needles, is often sponsored by those involved in the production of such materials or by professionals who use them in their work.
Recent Trends
As we mentioned in the introduction, the topic of mizuko kuyō and the curse associated with mizuko is brought up almost weekly in the mass media. Advertisements by temples performing mizuko kuyō in newspapers and weekly magazines, and those distributed through the mail, reach a peak around the higan periods of the spring and fall equinoxes, when the Japanese traditionally visit the ancestral graves. The practice of mizuko kuyō is best represented by the building of mizuko Jizō (Skt. Kṣitigarbha) statues. The large numbers and energetic activity of those involved in using Jizō include temples specializing in mizuko kuyō such as the Shiunzan Jizō-ji, the numerous temples of various Buddhist sects which have traditionally performed mizuko kuyō as one of many activities, and the Benten-shū, which encourages performing mizuko kuyō by building an enormous Jizō tower. Recently more and more Buddhist temples are building new mizuko Jizō statues and encouraging mizuko kuyō. There are also many new religions which, though they do not adopt the form of building mizuko Jizō statues, offer comfort from and nullify the guilt and curse associated with the spirit of a mizuko.
In light of this religious situation, one has reason to suspect that it is not merely the promiscuous sexual habits of the younger generation leading to increased abortions which accounts for the recent popularity of mizuko kuyō. There is no doubt that mizuko kuyō is closely related to the Japanese concept of spirits. We will now clarify the contemporary situation with regard to mizuko kuyō by comparing it to the infanticide (mabiki 間引き) and abortions practiced during the Edo period (17th-19th centuries).
The first point that must be clarified concerns who practices mizuko kuyō. Let us take a look at the statistics on abortion compiled by the government under the Eugenics Protection Act (Charts II and III). These statistics are based on official reports submitted by doctors, and it is believed that the actual number of abortions may be close to twice the reported figures. We are not concerned here with the exact totals, but rather with the percentages and trends according to age groups. Also, it is safe to assume that the age group which experiences the most abortions would be most active in performing mizuko kuyō.
Chart II shows the total number of abortions according to age group, and Chart III shows the percentage of total abortions according to the same age groups. One striking point is that although the general trend for the past thirty years is for the number of abortions to decline, the trend among women 30-34 years of age has reversed and the number of abortions by women in this age group has increased steadily since 1977. Abortions had decreased steadily and were about the same in number as for those 25-29 years of age, but since 1978 the two groups have headed in opposite directions and now women between the ages of 30-34 have a much larger number of abortions. As for the theory that abortions have increased among young people due to their sexual promiscuity, we can see that the number of abortions by those under twenty years of age has indeed shown a steady increase and topped 20,000 for the first time in 1981. However, this still accounts for only 3.7% of the total number of abortions, and other than the worrisome fact that the numbers are increasing steadily, this cannot be considered the major group involved in having abortions. In fact it is the age group of women around thirty years of age (25-34), the majority of whom are probably married women with children, which forms the dominant group of people who have abortions. In 1981 they had 51.8% of the total number of abortions in Japan.
Abortions in Traditional and Contemporary Japan
We have seen that the majority of abortions in Japan are performed on married women. The same could be said of infanticide and abortions during the Edo period. Let us then analyze the differences between the situation of married women in traditional society and those in modern society, whose basic unit is the nuclear family: 1. Pregnancy. In traditional society one lived in a closed, communal society. The fact of pregnancy soon became common public knowledge. In contrast, in the contemporary situation pregnancy is an individual or family affair and not a matter of public concern.
2. Performance of abortion. In traditional society abortion was performed with what now seem very crude and unreliable methods, often placing the mother in danger of losing her life (see Onshizaidan Aiikukai 1975). Today, under the Eugenics Protection Act, anyone can freely choose to have a safe abortion. The medical techniques have advanced to the point where it is not even necessary to be hospitalized. Therefore, the most important point is that it is possible to keep the fact secret. 3. Reasons for abortion. In traditional society abortion was accepted as a response to natural calamity or as an unavoidable means of survival.
In modern times abortion is a result of "a strong emphasis on birth control or a general heightening of a preference for fewer children" (Muramatsu 1983, p. 14). Brooks writes of "those persons who seem to abort their children because of self-centered materialistic aspirations" (1981, p. 133). It is a fact that there is no poverty in Japan today (compared with that of the past), and even though people have more than enough of the necessities of life, they constantly seek to improve their economic condition. Since any more than the minimum number of children would hinder this quest, it is commonly believed that many people immediately choose to have an abortion as soon as pregnancy is discovered. In this sense, contemporary abortions are performed for individual reasons, and each must be justified by the individual.
4. Responsibility for abortions. It is clear that responsibility for abortions in traditional society was not merely that of the individual. Responsibility was shared by the community in general, or there may not have been any sense of responsibility at all. In contemporary Japan the responsibility must be borne in secret, completely by the individual. This is the basis for the feeling of indebtedness (fu no seishinsei) in the title of this article, which will be discussed in more detail later.
5. The concept of the spirits of children.
According to Chiba and Ōtsu, in traditional society there were definite differences in funerary practices between those performed for adults and those for children. Since "the life of a newborn child was sent into this world from the spiritual realm of the kami (shinrei 神霊)" (1983, p. 37), a Buddhist funeral was denied children, but prayers were added by relatives that they would be reborn in this world. Chiba and Ōtsu extrapolate the idea that "since the spirit is something which is given by the kami, it can be returned in case it is not needed at that time, and received again when it is required" (1983, pp. 141-142). In contemporary Japan, however, there is no belief in a special or different kind of spirit belonging to children. At the risk of being misunderstood, it could be said that children have now been included in the same group as that of adults. The next point is related to this one.
6. Naming of children. In traditional society, as mentioned above, children who died under the age of seven were not buried in a grave, and thus lost any further direct connection with the village or temple. This indicates that these children had not yet become full-fledged members of the group or society (Chiba and Ōtsu 1983, p. 167), and that according to the concept of spirits expressed in section 5, they would have another chance to be reborn and join that society. The particular characteristic of these children, including aborted and stillborn children, was their namelessness. In contemporary Japan, on the other hand, children (including stillborn children) are buried with the same formal funeral ceremonies as adults, even though these are admittedly more toned down than a funeral for an adult. Usually posthumous Buddhist names (kaimyō 戒名 or hōmyō 法名) are given. A problem arises with aborted children. In general, funeral ceremonies are not performed for them and they are not given posthumous Buddhist names. However, the namelessness of mizuko is based on the fact that the abortion was carried out in secret, hidden from society in general. The fact still remains that for the person who aborted the child, a life had been harbored in her body and that this life was rejected by her. The memory remains and cannot be easily dismissed. From the perspective of the Japanese concept of life and spirits, the aborted child in contemporary society should have a name.
Abortion and Religion
In light of the above analysis, there are clearly at least two definite differences between the two societies: first, the difference in social system, and second, the difference in the concept of spirits with regard to children. Both of these differences are well illustrated by the attitude taken by these two kinds of societies toward abortion.
The traditional, local community, with its strong interpersonal relationships and restrictive local mores, had a concept of spirits which distinguished between those of adults and those of children. Therefore infanticide and abortion could be justified to some extent among the community as a whole. To that extent the responsibility was also shared by the community, and an individual did not have to bear the burden alone. In addition, the belief that the child could be reborn spared the individual and community from suffering guilt for having performed infanticide or abortion. There is also the fact that the life expectancy of children in general was very uncertain at this time, and the death of a child was easier to accept.
Contemporary society, on the other hand, has evolved more and more, along with increasing industrialization and urbanization, toward being centered around small families and the nuclear family. Now abortions are carried out in a milieu wherein the spirits of adults and children are not distinguished, and so the aborted child is considered a living entity entitled to life. In such a society the individual seeks to have an abortion alone and in secret, and thus must also bear the responsibility alone and in secret. It is in this situation that a conscientious person will suffer self-recrimination and a feeling of indebtedness. As Ono has written, "Instead of the offering of a memorial for all spirits (in the traditional local community) in which the unhappiness and suffering of others, and one's own distress, is shared, could it not be said that the popularity of mizuko kuyō is an individualization of suffering?" (1982, p. 25) Given this structure of contemporary society, the causes for the popularity of mizuko kuyō become clear. As Brooks has pointed out, behind this increasing popularity are the "conflicting feelings" of those who undergo abortions. On the one hand there is the feeling that abortion goes against the principle of respect for life. On the other hand is the belief that spirits of mizuko who are not memorialized are potentially dangerous, i.e. there is a fear of suffering from a "curse" (Brooks 1981, pp. 133-137). Abortion in contemporary Japan is not unavoidable or necessitated by natural calamity, but is carried out by individual will in the midst of material prosperity, against a child whose probability of dying before reaching maturity is otherwise extremely low, and who has become "humanized" in contemporary society. This results in a feeling of indebtedness and self-recrimination and a search for a cause and effect relationship which finds the reason for one's happiness or unhappiness in the fact of an abortion.² In addition, anxiety over a possible curse from an aborted baby, based on this feeling of indebtedness, comes from the aforementioned development that children are now considered as being in the same category as adults (the "humanization" of children). In traditional society the spirits of children were not considered as possible purveyors of a curse, whereas in contemporary society the spirits of children are treated the same as the spirits of adults, and thus have the potential for casting a curse. We can thus say that contemporary mizuko kuyō has the purpose of providing comfort from the feeling of indebtedness and anxiety which comes from a fear of this curse. Examples to illustrate this point are easily found, such as the writings of Hashimoto Tetsuma of the Shiunzan Jizō-ji on the transformation of such curses to worldly benefits through performing memorial services, notes such as omoidegusa 思い出草 (unpublished reminiscences) by pilgrims to Jikishi-an in Kyoto, or the pamphlet Yasuragi (Comfort) published by the Benten-shū. For one concrete example, let us look at some of the unpublished notes known as omoidegusa: After having one miscarriage and aborting one child, we were blessed with two children, one boy and one girl, and I am now living happily with both my husband and aunt.
However, I feel heartbroken when I think of that child (which I aborted) and think of what it would be like if the child was alive.
I am now thirty-six years old, and I think that many different experiences, though some are very sorrowful and painful, help one to grow and mature.
I also had a romantic relationship [when I was young] which ended with me crying the night away, but this experience has made me stronger. I want to tell young people to face life resolutely, and not be disheartened. I look forward to continuing self-improvement.
² Brooks, referring to Nakamura (1967, p. 143), attributes this to the "non-rational or non-scientific habits . . . among many Japanese" (1981, p. 134).
Finally, a few words on mizuko kuyō and the practice of religion in general in contemporary Japan. Morioka categorizes the relationships between contemporary people and religious institutions as follows: 1. Temporary (ichijiteki 一時的) relationship. A relationship which does not continue steadily for a long period, but is a temporary relationship based on a short-term need.
2. Surface (hyōmenteki 表面的) relationship. A merely outward or superficial relationship which does not penetrate to the deepest dimension of the personality.
4. Liberated (kaihōteki 解放的) relationship. A relationship which is not restrictive but is liberated from the narrow confines of the traditional family temple relationship (1981, p. 93). These relationships to religious institutions that Morioka points out as indicative of a nuclear-family based society in this age of urbanization are very pertinent to mizuko kuyō. For example, mizuko kuyō is in almost all cases a temporary relationship. There is no formal funeral, and any follow-up services are done at the discretion and convenience of the individual. It is a superficial, surface relationship, because it ends as soon as one is set free from any possible curse and is thus comforted. It is a mutually "beneficial" relationship for the same reason. It is a "liberated" relationship because in most cases the person seeking to offer a memorial service does not go to the family temple but to a place with which one has little or no previous connection. We can conclude that mizuko kuyō fits right into the pattern defined by Morioka as typical for contemporary religious activity in Japan.
Memorials for Pets and Dolls
Finally, some comments concerning memorial services for pets and dolls. As we mentioned at the beginning of this article, animals and tools which are beneficial to human society have traditionally been the objects of "memorial" services. This practice continues today, in a sense, in the form of memorial services for pets and dolls. Of course memorial services for useful animals, such as farm animals, and for tools used in one's work, also continue as before.
The belief that animals and even inanimate objects possess some form of "spirit" or "soul" is as common among the Japanese today as it was in the past. How does this relate to the current cases of memorial services for pets and dolls? We believe that memorial services carried out in contemporary Japan for pets and dolls are fundamentally different from those carried out in the past for animals. On the one hand the belief in the presence of spirits in these objects is the same. This is illustrated by the following story printed in the 9 September 1984 issue of the Asahi newspaper. The residents of Urawa City in Saitama Prefecture had submitted a petition to the city government that the dead bodies of animals not be disposed of and burned with regular waste material, and the city was considering the construction of a separate furnace specifically for the cremation of pets. According to our findings, about twenty of the cities in a certain suburban area of Tokyo do not at the present time burn the bodies of pets along with the garbage, but have commissioned the handling of such pets to a certain private dog and cat cemetery. It is clear that even government agencies in Japan are not comfortable with handling the bodies of pets as just so much garbage. The idea that "all sentient beings possess the buddha nature" is alive and well in modern Japan, but a fundamental difference has arisen in contemporary society concerning the place of animals. The basic unit of society today is the nuclear family, and these families have fewer members. Since there are fewer children, those children have fewer brothers and sisters, and often animals (pets) have become substitutes. In the case of older couples, animals become substitutes for children. The fact that animals are becoming "humanized" is reflected in the facts that their sphere of life is widening into areas formerly reserved only for human beings, such as their being considered a member of the family, and that there are now an increasing number of magnificent cemeteries and family columbariums run by pet professionals specifically for dogs, cats, or other pets after they die. This trend cannot be explained merely by the lack of space in the city to continue the practice of traditional society, or in the countryside, where pets were privately buried in one's garden or back yard.
The same can be said for memorial services for dolls. Since ancient times dolls were considered vehicles for spirit possession, and often services (ningyō okuri) were held to send the doll's "spirit" on to the next world, but in the present day we find a tendency to consider a doll a member of the family, just as "animals" became "pets." The same "humanization" has occurred.
These animals and dolls, which in traditional society were considered by the community and workers as objects which contributed to the welfare of human society, are in contemporary society considered the private possessions of individuals, and are gradually becoming "humanized." It is at this point that we find commonality with mizuko kuyō. The characteristic relationships between contemporary people and religion outlined by Morioka can be seen here in that the funeral services for these pets are conducted by the individual owner apart from the aegis of the family temple; the relationship is temporary, as seen in the typical contract with a pet cemetery for only three years, and so forth.
We can conclude with the comment that although memorials for animals and tools by communities and professional groups continue as in traditional society, these kinds of memorials have split into two streams. The fact that memorials are now for pets and not merely "animals" is the most conspicuous aspect of such memorials in contemporary Japan. | 2018-12-21T02:10:52.323Z | 1987-11-01T00:00:00.000 | {
"year": 1987,
"sha1": "bffa700eb982399d64f7d9e98d0329a879ddb830",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18874/jjrs.14.4.1987.305-320",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fc61a7384c37627e4972d2c203f1e444b95c5843",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"History"
]
} |
134650015 | pes2o/s2orc | v3-fos-license | Analysis of Precipitation and Temperature Extremes over the Muda River Basin, Malaysia
Trends in precipitation and temperature extremes of the Muda River Basin (MRB) in north-western Peninsular Malaysia were analyzed from 1985 to 2015. Daily climate data from eight stations that passed high quality data control and four homogeneity tests (standard normal homogeneity test, Pettitt test, Buishand range test, and von Neumann ratio test) were used to calculate 22 Expert Team on Climate Change Detection and Indices (ETCCDI) extreme indices. Nonparametric Mann–Kendall, modified Mann–Kendall and Sen's slope tests were applied to detect the trend and magnitude changes of the climate extremes. Overall, the results indicate that monthly precipitation tended to increase significantly in January (17.01 mm/decade) and December (23.23 mm/decade), but decrease significantly in May (26.21 mm/decade), at a 95% significance level. Monthly precipitation tended to increase in the northeast monsoon, but decrease in the southwest monsoon. The Mann–Kendall test detected insignificant trends in most of the annual climate extremes, except the extremely wet days (R99p), mean of maximum temperature (TXmean), mean of minimum temperature (TNmean), cool days (TX10p), cool nights (TN10p), warm days (TX90p) and warm nights (TN90p) indices. The number of heavy (R10mm), very heavy (R20mm), and violent (R50mm) precipitation days changed at magnitudes of 0~2.73, −2.14~3.33, and −1.67~1.29 days/decade, respectively. Meanwhile, the maximum 1-day (Rx1d) and 5-day (Rx5d) precipitation amount indices changed from −10.18 to 3.88 mm/decade and −21.09 to 24.69 mm/decade, respectively. At the Ampangan Muda station, TNmean (0.32 °C/decade) increased at a higher rate compared to TXmean (0.22 °C/decade). The number of cold days and nights tended to decrease, while an opposite trend was found for the warm days and nights.
Introduction
Climate change is a well-known threat to the social, economic, and environmental spheres [1]. The number and intensity of recorded natural hazards such as floods, droughts, heatwaves and wildfires have increased as climate change intensifies. For instance, climate change has caused the frequent occurrence of devastating heatwaves in northern Europe, Asia and many other places [2]. Elsewhere, based on Kundzewicz et al. [3], the highest recorded annual flood loss occurred in China in 2010, causing a total loss of ~USD 51 billion. Quantifying precipitation and temperature extremes in a specific location is therefore essential to understand the effects of climate change on natural hazards.
Current research on precipitation and temperature extremes, however, has been dominated by researchers in Australia, China, Europe, and the United States [4,5]. Besides that, a majority of the climate extreme trend research has involved country-scale assessments [6,7]. Although these studies have greatly improved our understanding of the changes of climate extremes, the risk and trend vary considerably across regions and need to be studied. Moreover, the relationship between climate patterns and natural hazards is complicated within a river basin, but relatively little research has explored the basin-scale characteristics of climate extremes.
An analysis of the changes in precipitation and temperature extremes in the upper Blue Nile Basin, Ethiopia was conducted by Worku et al. [8] using the extreme indices of the Expert Team on Climate Change Detection and Indices (ETCCDI). They found signs of climate change over the basin due to the increasing events and trends of climate extremes. Similar ETCCDI indices have been applied in other basin-scale studies, including the Koshi River Basin [9] and the Songhua River Basin [10]. Most importantly, the World Meteorological Organization (WMO) recommends the application of the ETCCDI extreme indices to allow a better comparison among climate extremes studies around the world.
Malaysian precipitation extremes studies have mainly been conducted at the national scale [11][12][13]. Extreme precipitation during the monsoon flood season (December to February) over the east coast of Peninsular Malaysia increased during moderate La Niña events rather than strong La Niña events [11]. In a basin-scale assessment, Tan et al. [14] conducted a comprehensive analysis of precipitation extreme changes over the Kelantan River Basin from 1985 to 2014 using the ETCCDI indices. As reported, increasing trends were found in most of the evaluated indices over this basin, except the consecutive wet days (CWD) and consecutive dry days (CDD) indices. Such detailed basin-scale assessments need to be conducted in other Malaysian river basins as well. This is important for effective management of the river basins via understanding of the basin climate system and potential water-related hazards.
The Muda River Basin (MRB), located in north-western Peninsular Malaysia, supplies freshwater resources to the states of Kedah and Penang for domestic, industrial and agricultural purposes [15]. Based on Ghani et al. [16], floods hit the basin almost every year during the wet seasons. One of the major floods occurred in October 2003 and affected about 45,000 people. Besides that, floods and droughts have also caused multi-million-ringgit losses of paddy each year. In a national-scale assessment, the total precipitation (PRCPTOT) increased significantly in the period of June to August during El Niño at two climate stations located in north-western Peninsular Malaysia [11]. However, none of the evaluated stations were located within the MRB. Therefore, a basin-scale assessment, which might yield findings contrary to the national-scale assessment, is required.
The present study aims to provide a comprehensive analysis of the climate extremes changes over the MRB from 1985 to 2015. The spatio-temporal trends of the precipitation and temperature extremes were evaluated using the ETCCDI indices. This study contributes to a better understanding of climate extremes changes in a typical tropical river basin. Moreover, the findings will act as a baseline for projections of future climate extremes. Sections 2 and 3 describe the study area, materials and methods. Results and discussion are presented in Sections 4 and 5, respectively. A brief summary of this study is given in Section 6.
Study Area
The MRB lies between latitudes 5°20′-6°20′ N and longitudes 100°20′-101°20′ E (Figure 1a,b). It has a drainage area of about 4111 km². Most of the MRB is located in Kedah, with a minor part in Penang, so both states have the right to withdraw water from the Muda River. The Muda Dam and Beris Dam are the two major dams within the basin. The Muda Dam, with a storage of 160 million m³ [17], was constructed in 1969 under the Muda Irrigation Scheme for irrigating the paddy fields in the southern and south-eastern regions of Kedah. The Muda Agriculture Development Authority (MADA) is responsible for operating and maintaining the dam. Meanwhile, the Beris Dam (122 million m³) is operated and maintained by the Department of Irrigation and Drainage (DID). Perbadanan Bekalan Air Pulau Pinang (PBAPP), the only water supply company in Penang, abstracts water from the downstream reaches of the Muda River. In fact, Penang is highly dependent on the Muda River, as more than 80% of the state's water comes from this river [18]. Hence, any reduction in the water resources of the MRB could directly affect the domestic and industrial sectors in Penang.
Regional monthly precipitation of the MRB was measured as the mean of the eight stations shown in Figure 1c. Similar to other Malaysian river basins, the climate system of the MRB can be divided into the northeast monsoon (NEM, November to March), the southwest monsoon (SWM, May to September) and two inter-monsoon seasons [19]. The MRB is less affected by the NEM because the Titiwangsa range blocks the heavy precipitation [13]. Meanwhile, the SWM effect on the MRB is reduced by the mountain ranges in Sumatra, Indonesia. There are two precipitation peaks, during the months of April and October, in the 1985-2015 period, showing that the inter-monsoon seasons bring heavier precipitation than the NEM and SWM. Heavy precipitation in the inter-monsoon seasons is mainly brought by local convective systems [13]. The basin received annual precipitation of 2160 mm/year to 3000 mm/year during the period of 1985-2015. As shown in Figure 1c, the mean monthly maximum (Tmax) and minimum (Tmin) temperatures at the Ampangan Muda station ranged from 30.9-34.5 °C and 21-23.5 °C, respectively. In 2016, a super El Niño event severely reduced the water storage capacities of the Muda Dam and the Beris Dam to critical levels of 45.2% and 38.3%, respectively [18].

Data and Quality Control

Observed daily precipitation data from 1985 to 2015 were collected from the Malaysian Meteorological Department (MMD). Basic information on the nine stations is listed in Table 1. Application of unreliable observed climate data might lead to wrong conclusions on climate conditions, so data quality control and homogeneity tests were conducted to minimize error. Data quality control involves removal of stations with an inhomogeneous trend, i.e., those with more than 10% missing or unreasonable values. The latter refers to extremely high or negative precipitation and temperature values, and to cases where the Tmin value is greater than the Tmax value. A homogeneity test is performed to detect and remove inhomogeneous stations from the trend analysis. Inhomogeneous climate data due to changes in instrumentation, environment and measurement approach might mask the real climate conditions [20]. Four homogeneity tests (standard normal homogeneity test, Pettitt test, Buishand range test and von Neumann ratio test) recommended by Wijngaard et al. [21] were used to assess the homogeneity of the daily precipitation data. The stations were then categorized as "useful", "doubtful" or "suspect" when one or none, two, or three or all of the tests rejected the null hypothesis at the 1% significance level, respectively. The XLSTAT-Time Series Analysis module, an add-in of Microsoft Excel that contains the four selected homogeneity tests, was used.
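As an illustration of one of these procedures, the sketch below implements the Pettitt change-point test in Python with the commonly used approximation for the significance probability; the study itself used XLSTAT, so this is a sketch rather than the exact routine applied here, and the function name and input series are our own.

```python
import numpy as np

def pettitt_test(x):
    """Pettitt (1979) change-point test for a single series.

    Returns the index of the most probable change point and the
    approximate two-sided significance probability.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum over i <= t, j > t of sign(x_j - x_i)
    u = np.array([np.sign(x[t + 1:, None] - x[:t + 1]).sum()
                  for t in range(n - 1)])
    k = np.abs(u).max()                      # test statistic K
    t_change = int(np.abs(u).argmax())       # last index of the first segment
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))  # approximate p-value
    return t_change, min(p, 1.0)

# Usage: flag a 31-year annual series as inhomogeneous at the 1% level
series = np.random.default_rng(1).normal(2500, 300, 31)  # synthetic totals, mm
change_at, p_value = pettitt_test(series)
print(change_at, p_value, p_value < 0.01)
```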
Trend Analysis
Trends in precipitation, Tmax and Tmin, both general and extreme, were analyzed using the non-parametric Mann-Kendall (MK) test. This method is widely applied in hydro-climatic trend analysis [6,22]. The null hypothesis of the MK test is "there is no trend in the time series", based on the assumption that the data are randomly ordered and independent. A positive MK value indicates an increasing trend, while a negative MK value shows a decreasing trend. However, positive serial correlation in climate data increases the probability of a significant output, indirectly leading to a false trend [23]. Therefore, a modified MK test introduced by Hamed and Rao [24] was employed to evaluate the trend of serially correlated data. The magnitude of the detected trends was calculated using Sen's slope test. Missing values in the climate data do not affect the outputs, as these are rank-based techniques [25]. The regional trend was measured using the arithmetic mean of all stations over the basin [26]. The trends were assessed at a 95% significance level. Detailed calculations of the MK and Sen's slope tests are available in several hydro-climatic trend analysis manuscripts [14,27].
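A minimal Python sketch of the MK test and Sen's slope estimator is given below. For brevity it omits the tie correction in the variance and the Hamed and Rao autocorrelation correction, so it should be read as an illustration of the basic procedure rather than the exact implementation used in this study; the input series is synthetic.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie or autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)           # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))             # two-sided p-value
    return z, p

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (units per time step)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

# Example: synthetic annual PRCPTOT series; slope * 10 gives mm/decade
rng = np.random.default_rng(0)
series = 2500 + 5 * np.arange(31) + rng.normal(0, 150, 31)
z, p = mann_kendall(series)                    # significant at 95% level if p < 0.05
print(z, p, 10 * sens_slope(series), "mm/decade")
```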
Extreme Indices
A set of ETCCDI precipitation and temperature extremes indices, as listed in Table 2, was calculated from the quality-controlled daily data in this study.
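To make the index definitions concrete, the sketch below computes several of the precipitation indices from a daily series in Python (pandas), assuming the standard ETCCDI wet-day threshold of 1 mm; it is an illustration, not the ETCCDI reference software, and the synthetic input series is hypothetical.

```python
import numpy as np
import pandas as pd

def precipitation_indices(daily):
    """Annual PRCPTOT, R10mm, R20mm, R50mm, Rx1day, Rx5day, CDD, CWD
    from a daily precipitation Series (mm) with a DatetimeIndex."""

    def longest_run(mask):
        # length of the longest run of consecutive True values
        groups = (mask != mask.shift()).cumsum()
        return int(mask.groupby(groups).sum().max()) if mask.any() else 0

    by_year = daily.groupby(daily.index.year)
    return pd.DataFrame({
        "PRCPTOT": by_year.apply(lambda x: x[x >= 1.0].sum()),  # total on wet days
        "R10mm":  by_year.apply(lambda x: int((x >= 10).sum())),
        "R20mm":  by_year.apply(lambda x: int((x >= 20).sum())),
        "R50mm":  by_year.apply(lambda x: int((x >= 50).sum())),
        "Rx1day": by_year.max(),
        # within-year 5-day windows only, a simplification
        "Rx5day": by_year.apply(lambda x: x.rolling(5).sum().max()),
        "CDD":    by_year.apply(lambda x: longest_run(x < 1.0)),
        "CWD":    by_year.apply(lambda x: longest_run(x >= 1.0)),
    })

# Usage with a synthetic daily record
idx = pd.date_range("1985-01-01", "2015-12-31", freq="D")
rain = pd.Series(np.random.default_rng(2).gamma(0.6, 12, len(idx)), index=idx)
print(precipitation_indices(rain).head())
```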
Data Quality Control and Homogeneity Assessment
Table 1 shows that most of the precipitation gauges had less than 3% missing values, except the Pusat Pertanian Charok Padang station (~5.7% missing values). For the temperature data, the Ampangan Muda station is the only station within the basin with less than 10% missing values; it had about 6.5% missing values. The recorded daily Tmax and Tmin at the Ampangan Muda station varied from 23.5-38.6 °C and 15.2-25.8 °C, respectively. On the other hand, the temperature data of the Pusat Pertanian Charok Padang station were excluded from this study because about 12% of the values were missing.
Homogeneity results show that the Ampangan Muda and Butterworth stations are labeled as "useful". The Butterworth station is one of the principal climate stations in Malaysia, which is well maintained and calibrated by the MMD staff [28]. Meanwhile, the Ampangan Muda station is mainly used to monitor the climate conditions around the Muda Dam, and therefore more attention is given to it by local authorities. The remaining stations were classified as "doubtful", except the Hospital Baling and Hospital Sungai Petani stations, which were classified as "suspect". Two aspects were considered in removing the "suspect" stations: (1) location and (2) comparison with a nearby homogeneous station. After this rigorous data quality and homogeneity assessment, the Hospital Sungai Petani station was removed from the trend analysis.

Trend of Monthly Precipitation

As the annual precipitation changes can be represented by the PRCPTOT index, which is further discussed in Section 4.3, this section focuses solely on the monthly precipitation assessment. A significant increasing trend of monthly precipitation was found in January and December at a 95% significance level, with changing rates of 17.01 mm/decade and 23.23 mm/decade, respectively. Interestingly, monthly precipitation tended to increase in the low precipitation season of the MRB. For example, Figure 3a shows that 50% of the stations had a significant increasing trend in January, the month that received the lowest amount of monthly precipitation in the basin, at a 95% significance level. Similarly, most of the stations within the basin showed an increasing trend in monthly precipitation in November, December, February, and April. A possible explanation for the increases of monthly precipitation during these months might be the warmer conditions in Peninsular Malaysia over the past few decades [29]. Higher temperatures increased the water vapor in the atmosphere and thereby amplified local convection processes [30]. Therefore, more intense precipitation has occurred in these periods.
By contrast, a significant decreasing trend was observed in May with a rate of 26.21 mm/decade.Monthly precipitation in March, July, August, September, and October showed a decreasing trend during 1985 to 2015, with magnitudes ranging from 5.74 to 18.81 mm/decade.Figure 3 shows the decreases of monthly precipitation are mainly observed the middle part of the MRB, during the SWM.Consistent with the present result, Wong et al. [31] also found a drier condition during the SWM months in the north-western part of Peninsular Malaysia since the last three decades.This result may be explained by the fact that the El Niño Southern Oscillation (ENSO) signatures are largely limited to the southern equator during the SWM due to the interaction between the background flow and the regional anomalous circulation [31].Therefore, a lesser impact was found in the MRB that is located in far northern Peninsular Malaysia.
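To make the trend computation concrete, the following minimal Python sketch shows how a Sen's slope magnitude and a Mann-Kendall Z statistic can be obtained for a series such as the monthly values above. The function names and the sample series are ours, and the sketch omits the tie and autocorrelation corrections of the modified Mann-Kendall test used in the study.

```python
import math
from itertools import combinations
from statistics import median

def sens_slope(values):
    """Sen's slope: the median of all pairwise slopes (units per time step)."""
    return median((vj - vi) / (j - i)
                  for (i, vi), (j, vj) in combinations(enumerate(values), 2))

def mann_kendall_z(values):
    """Mann-Kendall Z statistic (no tie or autocorrelation correction)."""
    n = len(values)
    # S counts concordant minus discordant pairs
    s = sum((vj > vi) - (vj < vi)
            for (i, vi), (j, vj) in combinations(enumerate(values), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s == 0:
        return 0.0
    return (s - 1) / math.sqrt(var_s) if s > 0 else (s + 1) / math.sqrt(var_s)

# Hypothetical January precipitation series (mm), 1985-2015 (31 values)
january = [95 + 1.7 * t + (10 if t % 3 == 0 else -8) for t in range(31)]
print(f"Sen's slope: {sens_slope(january) * 10:.2f} mm/decade")
print(f"Mann-Kendall Z: {mann_kendall_z(january):.2f}  (|Z| > 1.96 => significant at 95%)")
```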
Annual Trend of Precipitation Extremes
The spatio-temporal annual trends of precipitation extremes from 1985 to 2015 over the MRB are presented in Figures 4 and 5. Overall, most of the precipitation extremes indices denoted insignificant trends, except R99p, which had a significant decreasing trend, as listed in Table 4. The Ampangan Muda station is the only station that had significant increasing trends in both the PRCPTOT and SDII indices, at a 95% significance level. The results show that the Muda Dam received more precipitation in recent decades. However, strong El Niño events in 1997/1998 and 2016 still resulted in prolonged drought and a water crisis in this region. Hence, effective precipitation collection strategies should be implemented to collect and store precipitation during non-El Niño periods.
For indices representing the number of extreme precipitation days, regional increasing trends were found for the R10mm, R20mm, and R50mm indices, with magnitudes of 1.02, 0.59, and 0.13 days/decade, respectively (Table 4). Increasing trends in the R10mm and R20mm indices are detected at most of the evaluated stations, as shown in Figure 5b,c, respectively. The current study found that the number of extreme precipitation days is increasing over the MRB, which is in agreement with results obtained in nearby regions, i.e., Singapore [6] and the Kelantan River Basin [14]. The regional trend in CDD decreased with a magnitude of 2.69 days/decade, while CWD increased at a rate of 0.1 days/decade. CDD and CWD showed that decreasing trends dominate over the northern MRB. The only significant increasing CWD trend was observed at the Badenoch Estate station, which is located in the southern MRB.
In contrast to some Southeast Asian studies [6,25] that indicated a tendency toward wetter conditions due to increasing R95p and R99p trends, the MRB experienced drier conditions in the period of 1985-2015. This is shown by the reduction of the precipitation amount contributed by the extremely wet days. R99p decreased at a rate of 32.37 mm/decade, which is significant at a 95% significance level, whereas the decreasing trend of R95p was 39.76 mm/decade. Reductions of R95p and R99p were mainly observed at stations in the southern MRB, as shown in Figure 5g,h.
During the period from 1985-2015, Figure 4j,k and Table 4 indicate decreasing trends of the Rx1d and Rx5d indices of 4.2 mm/decade and 2.5 mm/decade, respectively. The spatial evaluation shows that 75% of the stations had decreasing Rx1d trends, with a significant decreasing trend found in the middle of the MRB, at the Pusat Pertanian Batu Seketol station (41549). Meanwhile, a significant Rx5d decreasing trend was observed at the Hospital Baling station (41545) in the south-western region of the basin. This finding is contrary to that of the Kelantan River Basin in northeastern Peninsular Malaysia, where significant increasing trends of Rx1d and Rx5d were reported [14]. These contrasting results may be explained by the geographical difference whereby the Titiwangsa mountain range separates the west coast and the east coast of Peninsular Malaysia. The Titiwangsa mountain range dramatically reduces the intermittent strong cold surges and north-easterly winds blowing from the South China Sea across Peninsular Malaysia during the NEM [11].
Trend of Temperature Extremes
The temporal trends of the temperature extremes at the Ampangan Muda station are listed in Table 5. The Mann-Kendall test showed significant increasing trends in TXmean, TNmean, TX90p, and TN90p, and significant decreasing trends in TX10p and TN10p, at a 95% significance level. Annual TXmean and TNmean increased at rates of 0.22 and 0.32 °C/decade for the period 1985-2015, respectively. The increment of TNmean was larger than that of TXmean, with significant increasing trends found in each month (0.16-0.6 °C/decade), except February. Comparison of the findings with other Malaysian studies [29,32] confirms the warming rate of TNmean in Peninsular Malaysia. The annual DTR exhibited a decreasing trend at a rate of 0.07 °C/decade, indicating that the differences between TXmean and TNmean are getting smaller. On a monthly scale, the highest decrease occurred in January, with a significant rate of 0.70 °C/decade.
For the cold temperature extreme indices (TXn, TNn, TX10p, and TN10p), increasing trends were observed in the coldest days (TXn) and coldest nights (TNn), by 0.31 °C/decade and 0.65 °C/decade, respectively. By contrast, the frequencies of cool days (TX10p) and cool nights (TN10p) decreased by 2.3%/decade and 6.99%/decade, respectively. On the monthly scale, TNn increased significantly in almost every month (0.21-0.72 °C/decade), except February and May, indicating a warming trend of the coldest nights over the basin. Meanwhile, significant decreasing trends were found in TN10p in each month, except February, ranging from 2.91-8.16%/decade.
Discussion
The trends in the evaluated temperature extremes indices confirm warming trends in the MRB, similar to nearby countries such as Indonesia [25] and western Thailand [33]. It is interesting to note that most of the highest values of the temperature extremes indices were recorded during the super El Niño of 1997/1998 in this region. The findings are similar to those of Manton et al. [34], who found a reduction in the number of cold days and nights, and an increase in warm days and nights, in Southeast Asia. A possible explanation for the results may be the agricultural land expansion and logging activities in this region. Deforestation could heighten the emission of carbon dioxide to the atmosphere, leading to more radiation being retained in the Earth system. Large-scale deforestation in Southeast Asia leads to a hotter climate over the deforested area [35].
The availability of long-term, high-quality climate data remains a critical issue in the Southeast Asia region. For example, relatively short assessment periods for climate extremes' trends, mainly between 1980 and 2015, were used in studies in Singapore [6], Indonesia [25], the Kelantan River Basin [14], and this study. A cautionary example is that opposite climate extremes' trends were found between the periods of 1910-1995 and 1961-1998 in Australia [34]. Reliable climate stations did not exist before the late 1950s in some Southeast Asian countries. Therefore, more effort should be devoted to understanding the missing historical climate conditions.
Table 3 shows that decreases in Rx1d and Rx5d have been detected in most months, ranging from 0.38 to 7.5 mm/decade, except January, February, April, and December. The only opposite trend was found in June, where Rx1d had a decreasing trend of 1.35 mm/decade while Rx5d increased by 5.08 mm/decade. As Rx1d and Rx5d are flood-related indices, the spatial distribution of both indices was further evaluated in the flood period from September to November (Figure 6). A significant decreasing trend was found at the Hospital Baling station in September in both the Rx1d and Rx5d indices, at a 95% significance level. Most of the stations with decreasing Rx1d and Rx5d were found in the middle and southern regions of the MRB. However, a single extremely high Rx1d or Rx5d value could cause massive damage to the basin. For instance, one of the highest Rx5d values (~500 mm) was recorded at the Butterworth station during the destructive 2003 flood. Moreover, the climate change impact on the monthly streamflow of the MRB is expected to be more critical after the 2040s [36]. Therefore, several flood mitigation strategies have been proposed by Julien et al. [37] to reduce flood damage in the basin.
Although any drought conditions in Malaysia could be recovered within three months during the dry season [22,38], this natural disaster still leads to agricultural losses in the region [39]. This is mainly due to the high water demand from this basin, especially for the paddy irrigation system. North-western Peninsular Malaysia is the "rice bowl" of the country, contributing about 38% of the rice production in Malaysia. Reduction of the water level of the reservoirs due to the significant changes of the precipitation and temperature extremes would influence paddy productivity. For example, increases of the TXmean and TNmean as listed in Table 5 would amplify the evaporation rate of the paddy fields. Local authorities are searching for solutions to reduce the dependency of Penang state on this basin. A possible solution would be exploring new water resources in nearby basins, i.e., the Perai River Basin located in the southern part of the MRB. Besides that, seawater desalination technology, one of Singapore's national taps, could also be one of the solutions to reduce the impact of climate extremes on water resources in this region. However, high installation, maintenance, and operational costs might be a drawback of the desalination technology.

Conclusions

This study evaluated the trends of precipitation and temperature extremes over the MRB from 1985 to 2015 using 22 of the ETCCDI's extreme indices. Trends and magnitude changes of high-quality and homogeneous climate data from eight stations were analyzed using the non-parametric Mann-Kendall, modified Mann-Kendall, and Sen's slope approaches. The main findings can be summarized as follows:

- Interestingly, monthly precipitation tended to increase significantly in January (17.01 mm/decade) and December (23.23 mm/decade), which are among the months that received the lowest precipitation amounts, at a 95% significance level. Meanwhile, a significant decreasing monthly precipitation trend was found in May, with a rate of 26.21 mm/decade.
- Annual TXmean and TNmean at the Ampangan Muda station increased significantly at rates of 0.22 and 0.32 °C/decade, respectively. Apparently, TNmean increased at a higher rate than TXmean. The differences between TXmean and TNmean are getting smaller due to the decrease in DTR (0.07 °C/decade).
- Consistent with the literature on Southeast Asia [34], the number of cool days and nights of the basin tended to decrease, while increases were found in warm days and nights. Moreover, monthly precipitation tended to decrease in the SWM, but increase in the NEM.

This study is important as a baseline to evaluate the potential future changes in the precipitation and temperature extremes over the basin. Further research might explore the application of the Coordinated Regional Climate Downscaling Experiments-Southeast Asia (CORDEX-SEA) climate projections in basin-scale climate extremes assessment. Application and validation of satellite precipitation products [28,40,41] in measuring climate extremes could also be considered in future works. Impact assessment of climate extremes on regional crops' productivity [42], such as paddy, could also be one of the potential future studies.
Figure 1. (a) Muda River Basin, (b) Peninsular Malaysia, and (c) the climatology of precipitation, maximum and minimum temperature from 1985 to 2015.
Application of unreliable observed climate data might cause wrong conclusions on the climate conditions, so only quality-controlled data were used. Precipitation extreme indices can be categorized into two groups: (1) precipitation indices: PRCPTOT, R95p, R99p, Rx1d, Rx5d, and SDII; and (2) number of precipitation days: CDD, CWD, R10mm, R20mm, and R50mm. The threshold of the user-defined daily violent precipitation was set to 50 mm/day based on the WMO precipitation classification. Temperature extremes indices can be classified into three groups: (1) extreme temperature values: TXmean, TNmean, and DTR; (2) warm extreme indices: TX90p, TN90p, TXx, and TNx; and (3) cold extreme indices: TX10p, TN10p, TXn, and TNn. Measurement of these indices was conducted using the RClimDex tool, which also performed the climate data quality control procedure mentioned in Section 3.1.
Figure 3. Spatial changes of monthly precipitation over the Muda River Basin from 1985 to 2015. Black dots indicate a significant trend at a 95% significance level.
Table 1. Station information and rainfall missing values (* station contains Tmax and Tmin data).

Table 2. Precipitation and temperature indices used in this study.

Table 3. Magnitude of the monthly precipitation, Rx1d, and Rx5d indices from 1985 to 2015. Bold indicates a significant trend at a 95% significance level.

Table 4. Magnitude of the annual precipitation extreme indices from 1985 to 2015. Bold indicates a significant trend at a 95% significance level.

Table 5. Temperature extreme trends at station 41638 from 1985 to 2015. Bold indicates a significant trend at a 95% significance level.
| 2019-02-07T05:36:11.993Z | 2019-02-06T00:00:00.000 | {
"year": 2019,
"sha1": "4b557c04ea4e15413eb9fd45cdfddb57d04c27d8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/11/2/283/pdf?version=1549967067",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e918ed7758ad4c457af2fda78229385cbc8dbeef",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geology"
]
} |
61458789 | pes2o/s2orc | v3-fos-license | Smart Manoeuvring in Mobile Robot for Anti-Personnel IED Detection
Most of the landmine detection robots proposed so far have been strongly restricted in locomotion inside a minefield because they cannot cross over a mine. We have therefore proposed a mine detection robot with hybrid locomotion, which can enter a minefield with low ground-surface contact, cross over a mine instead of changing its path, and scan for landmines directly using an EMI (Electro Magnetic Induction) sensor. The hybrid locomotion proposed for the robot combines a frame-walking technique with conventional wheeled locomotion. The robot switches its locomotion mechanism from wheeled to legged when a mine is detected, and vice versa, using a lead screw mechanism. The legged locomotion is achieved by a frame-walking technique in which the two frames translate with the help of the lead screw mechanism. The very purpose of adopting this combination is to evade anti-personnel landmines, which are relatively small in comparison to their anti-tank counterparts. With frame walking, the robot passes over the mine instead of going around it. The robot initially starts in wheeled mode and, upon detection of metal, invokes the frame-walking algorithm. The robot also deploys an obstacle avoidance algorithm when working in wheeled mode.
Introduction
In war situations and during military activities, landmine detection plays a major role. With an estimated 100 million landmines buried in over 60 different countries around the world, landmines have proven to be one of the most serious obstacles to sustainable development in many of the world's poorest countries. According to UNICEF, around 2,000 people are involved in landmine accidents every month, 800 (40%) of whom are innocent civilians; that is, on average one victim dies every 20 minutes. According to the UN, even though about 100,000 mines are removed every year, two million more replace them. At the beginning of the 20th century, nearly 80% of landmine victims were military personnel. Today, 90% of landmine victims are civilians, most of whom are children. These mines not only inflict physical and psychological damage on civilians, but also disturb the economic development of nations where buried mines abound, and prevent these countries from achieving socioeconomic stabilization.
A landmine is a weight-triggered explosive device intended to damage a target, either human or inanimate, by means of a blast or fragment impact. Landmines were mainly designed as area-denial weapons and are used to create tactical barriers in order to prevent direct attack or to deny military and civilian access to a defined area. Landmines are perfect soldiers that never eat, sleep, miss, fall ill, or disobey. Moreover, a landmine does its job for far fewer U.S. dollars than a human soldier; in addition, landmines are long-term killers, active long after a war has ended. Landmines, which are a serious threat to human society, come in various forms, namely Anti-Personnel (AP) mines and Anti-Tank (AT) mines. AP mines are designed to kill or injure people, while their AT counterparts are designed to disable vehicles.

This project aims at developing a hybrid locomotion robot for anti-personnel mine detection. Two modes of locomotion are considered, namely legged and wheeled locomotion. This robot has a special foot mechanism by which locomotion changes over from legged to wheeled and vice versa. The legged locomotion will cross over the mine instead of moving around it. Designing a stable mechanical structure for the hybrid locomotion robot is the main objective of this project. Legged locomotion is considered because its area of ground contact is lower than that of other locomotion modes; with legged locomotion the robot can cross over the mine and manoeuvre its legs more accurately, legs are stable, can travel over rugged terrain, and are more efficient than other locomotion modes for this task. The drawback of legged motion is that, outside mine detection areas, the distance covered per unit time is small; this is the reason a hybrid locomotion is needed. The second locomotion mode is therefore wheeled locomotion, which is considerably faster and can be used on flat terrain. Based on the literature reviewed on legged and wheeled mobile robots, a stable mechanical structure for the hybrid robot was designed by analysing the best combination of legged and wheeled locomotion. The robot's mechanical structure is designed in such a way that, with the help of its hybrid combination, it passes over an anti-personnel landmine without detonating it.
Techniques and Design
The hybrid locomotion robot comprises the following main parts, listed front to back; its layout is shown in Figure 1:
• outer frame with legs, wheels, arms, and lead screw
• inner frame with legs
• lead screws translating the frames with guideways
• rack mechanism actuating the legs

The technique adopted for hybrid locomotion combines sliding and rolling mobility in a single platform. The legs and wheels are assembled in such a way that sliding and rolling motion can be achieved independently and also simultaneously. The sliding motion of the outer and inner leg assemblies is produced by the lead screw mechanism, which is useful for safe manoeuvring in infested areas, such as locations where landmines are buried, especially on uneven surfaces. The robot platform consists of an inner leg assembly and an outer leg assembly with linear contacts so that sliding motion of the robot is achieved. The wheel and arm assemblies are integrated on the outer leg assembly in such a way as to provide rolling mobility to the robot.
The robot has a sliding motion by legs, which provides a low area of contact with the ground, and unevenness of the ground is taken care of by adjusting the legs on the outer and inner leg assemblies. In order to reduce the number of motors, the front and rear leg pairs are each operated by two motors: both front legs are joined by a beam that carries the rack, so that the legs can be actuated using a rack and pinion mechanism. The wheels are in ground contact when all the legs are raised; the robot then moves by triggering the rack and pinion mechanisms on the front and rear of the outer leg base. In wheeled mobility, a neutral turn and steering are achieved by changing the direction of rotation and varying the speed of the wheels on either side of the outer base legs, respectively.
The speed in walking is low, but the robot can easily manoeuvre through dangerous locations. In the case of wheeled mobility the speed is high, but manoeuvring is difficult. With this design, mission speed is optimized by combining the merits of the sliding and wheeled locomotion. The sliding locomotion is utilized in dangerous areas, where decoupling the path of the body from the sequence of leg motions facilitates a higher degree of mobility in a constrained environment; elsewhere, the wheels are in action so that speed is optimized for the specific mission. With the combined actions of legs and wheels in a specified sequence, the robot can thereby negotiate even larger ground undulations. Trench and vertical obstacle crossing is done by the wheels, but in some situations the legs and wheels need to act simultaneously.
Velocity, Force and Torque
For this hybrid locomotion robot, the motor has a rotational speed of 60 rpm under load. With a wheel diameter of 0.1 m, the velocity is

Velocity = circumference × rpm = π × diameter × rpm = π × 0.1 m × 60 rpm ≈ 18.85 m/min ≈ 0.31 m/s.
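For concreteness, a minimal Python sketch of this calculation (the function name is ours):

```python
import math

def wheel_velocity_mps(diameter_m: float, rpm: float) -> float:
    """Linear velocity (m/s) of a wheel of the given diameter at the given rpm."""
    circumference = math.pi * diameter_m   # distance covered per revolution (m)
    return circumference * rpm / 60.0      # rpm -> revolutions per second

v = wheel_velocity_mps(0.1, 60)
print(f"{v:.3f} m/s = {v * 60:.2f} m/min")  # ~0.314 m/s, ~18.85 m/min
```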
Analysis and Simulation
Modal analysis was carried out for the inner frame and the outer frame separately to visualize the maximum deformation that can occur when the robot is subjected to loads at various points. Figure 1 represents the modal analysis of the inner frame when it is subjected to deformation with respect to the z axis. Figure 3 represents the modal analysis of the outer frame when it is subjected to deformation with respect to the y axis.
Control Architecture
The architecture is adopted to suit the hybrid locomotion. The system follows a behavioural approach, with sensors to interpret the physical world, an onboard microcontroller-based control system for processing the collected data and for decision making, and a set of actuators to do the useful work.
Mode Selector

The mode selector constantly monitors the metal detection sensor output and the ultrasonic sensor output, with the former having higher weightage than the latter. The metal detector's digital output is tied to the RB0/INT pin of the PIC 16F877A, which is an external interrupt pin. Upon being interrupted, the microcontroller calls the frame-walking subroutine, which involves a series of motor drives to negotiate the mine, which in our case is a coin for experimental purposes. The metal detector also gives an alarm as soon as it detects a metal object. When not interrupted through RB0, the mode selector authorizes wheeled mode, which performs navigation through the ultrasonic sensor 1. A minimal sketch of this mode-selection priority is given after the sample case below.

Motion Planning

When in frame-walking mode, the motion planning algorithm guides the robot to negotiate the coins. The knowledge base holds the set of possible cases the robot can be compatible with; by "case" we mean a possible pattern of coin distribution on the floor. The case verifier refers to the knowledge base and checks whether the current case is present or not. If it gets a positive reply, it initiates the action execution to perform the corresponding actuation. A sample case is explained below 2.
Three mines, in our case coins, are placed in a horizontal line. The robot starts from START. As soon as it reaches the 1st coin, the metal sensor goes high. The robot, which is still in wheeled mode, continues in it and retraces its path by travelling for time 't1'. It then makes a right turn, which is its priority, travels for time 't2', then takes a left turn and travels for time 't1' 3. Now, if the sensor goes high, it understands the situation and repeats the same procedure to check for the 3rd coin. If positive, it aligns itself straight with the 2nd coin, switches to frame walking, and passes over. The above case only holds if the 2nd coin is placed close enough that the robot requires time 't2' to align to it; otherwise, the robot aligns itself with the 1st coin and performs frame walking 5.
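As referenced above, a minimal host-side Python sketch of the mode-selection priority follows; on the actual robot this logic runs as a PIC interrupt routine, and all names here are ours:

```python
from enum import Enum

class Mode(Enum):
    WHEELED = 1        # default: ultrasonic-guided navigation
    FRAME_WALKING = 2  # entered when the metal detector fires

def select_mode(metal_detected: bool) -> Mode:
    """Metal detection has priority over ultrasonic navigation,
    mimicking the external interrupt on the RB0/INT pin."""
    return Mode.FRAME_WALKING if metal_detected else Mode.WHEELED

# Hypothetical detector readings sampled along the path
for reading in (False, False, True, False):
    print(select_mode(reading).name)
```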
Obstacle Avoidance
With an onboard ultrasonic sensor, the robot performs an obstacle detection and avoidance scheme 6. The algorithm is devised to be very simple to reduce the computational load on the onboard 8-bit microcontroller. Upon detection of an obstacle, the algorithm measures the distance 'x' to the obstacle. If x>x3, the robot takes a 20° left turn, goes straight for time 't1', then takes a 20° right turn and goes straight. If x<x3 and x>x2, the orientation angle is 45°, and if x<x2 and x>x1, the angle is 90° 7.
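A minimal sketch of this thresholded steering rule; the numeric values of x1-x3 are hypothetical, since the paper does not state them:

```python
def avoidance_angle_deg(x: float, x1: float = 0.2, x2: float = 0.5, x3: float = 1.0):
    """Map the measured obstacle distance x (m) to an orientation angle (degrees)."""
    if x > x3:
        return 20          # far obstacle: shallow detour, later undone by a 20-degree right turn
    if x2 < x <= x3:
        return 45
    if x1 < x <= x2:
        return 90
    return None            # closer than x1: behaviour not specified in the paper

for distance in (1.5, 0.8, 0.3):
    print(distance, "->", avoidance_angle_deg(distance))
```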
Hardware Configuration
The metal sensor model 1139 from Sunrom Technologies is used. It has a sensing range of 7 cm 8. It is fixed on an extended arm-like structure in front of the robot. Apart from a logic-0 output via its OUT pin, it also gives a buzzer and LED output upon detection of metal objects. A Ping ultrasonic sensor, which has a range of 2 cm to 3 m, is used. The PIC 16F877A microcontroller is used as the control centre; with its RISC architecture it offers high performance. L293D-based motor drivers are used for driving 7 DC motors to achieve both modes of locomotion 9.
Specification of the Robot
Weight of the robot = 5 kg.
Conclusion
In this paper, the design and development of a hybrid locomotion mobile robot that mimics a mine detection robot is presented. The design evolved considering proper choice of material, a hassle-free locomotion scheme, reduced friction between moving parts, weight, size, and a seat for the onboard control system. In the later part, the hardware control architecture is discussed. The work aimed at developing a robust system, centred around a RISC computing device, that solves the problems of metal sensing, locomotion, and obstacle avoidance; each individual problem is briefly discussed. Experiments are being carried out to validate the system, and future prospects include upgrading the system to encounter uneven terrain with tracked wheels and maintaining stability while moving on uneven terrain. | 2019-02-15T14:19:14.576Z | 2015-11-14T00:00:00.000 | {
"year": 2015,
"sha1": "e2365303187b069305882b3d91dd1034ab2d10ff",
"oa_license": "CCBY",
"oa_url": "https://indjst.org/download-article.php?Article_Unique_Id=INDJST6572&Full_Text_Pdf_Download=True",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3950fc8702dd3eeecdb3b9e287bbf77eb94356a8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221699952 | pes2o/s2orc | v3-fos-license | Social pressures and reactions of adolescent drug users in an outpatient clinic
Coercive measures and social pressures may affect patients and the treatment for substance use disorder. This study analyzes the reactions of adolescents who use psychoactive substances to potentially coercive situations and the effects of these situations during treatment. The collected data were analyzed with mixed methods. Results show a prevalence of informal social pressures (48.1%). We classified patients' reactions as acceptance (17.5%), resistance (31.6%), and lack of motivation (14%). Resistance and lack of motivation can affect the treatment and patients' autonomy. The use of mixed methods was essential to analyze the medical records regarding senses and meanings, and allowed us to quantify and compare the findings with the literature and the qualitative data.
Social pressures and reactions of adolescent drug users in an outpatient clinic
The Brazilian model of mental health care has undergone important changes since the 1990s. Treatments became more focused on community care, breaking away from the hospital-based model and emphasizing humane treatment, social reintegration, and service appraisal to assure the rights of people with mental illness 1 . Based on this new approach to treating mental disorders, Psychosocial Care Centers (Caps) were established in the country.
Conceived to provide community care services, these centers allow patients to receive follow-up locally and help them restore social connections by encouraging reintegration into the various social roles they once played 2 . These centers also received priority in the care of people with severe and persistent mental illnesses of different age groups, including those with substance use disorders (SUD) 3 . Although their implementation represents an enormous advance in community-based mental health care, there has been much criticism regarding the care provided [4][5][6] .
Psychoactive substance use, along with other aspects typical of adolescence and a poor and inefficient service network, contributes to characterizing adolescents as a highly vulnerable group [7][8][9][10]. This is particularly alarming, as early psychoactive substance use anticipates consequences and losses associated with this issue, such as health problems, legal penalties, family and social conflicts, dropping out of school, and feelings of anxiety and guilt [11][12][13].
Caring for drug users is currently one of the greatest challenges for public mental health care managers and professionals 14. The stigma of aggressiveness and character defects surrounding substance users boosts support for legal pressure to compel individuals into treatment 15. This is a risky approach to providing health services, since it potentially hinders understanding of the patients' needs, resulting in poor and coercive treatment 16.
Perceiving addictive behaviors as a burden, both in the public health context and in social and economic terms, renders pressures from different sources an integral part of the process of seeking addiction treatment [17][18][19]. As a way to deal with such costs, society uses different control strategies to ensure that substance users receive treatment 20.
Social pressures are modes of social control, coercive or not 18,19,21 , classified into three types: legal (from judicial institutions); formal (from formal organizations, such as employers, schools, and social welfare programs); and informal (from family, friends, or acquaintances) [17][18][19] . For some authors, coercion and perceived coercion are synonyms, meaning that patients feel a lack of influence, control, freedom, or choice regarding treatment [22][23][24] .
Despite a lack of consensus on the effectiveness of social pressure in compelling individuals into treatment, Lidz and collaborators 25 state that feeling coerced negatively affects the patient. This leads to poor adherence to treatment and dropping out 18,[26][27][28] , loss of trust in caregivers, alienation, avoidance of treatment 25 , and lower patient satisfaction concerning new hospital admissions 29 . Identifying treatment-related ethical challenges helps to improve the quality of mental health care, to reduce the use of coercive strategies, and to increase patient's participation in the treatment 30 , in particular when dealing with young people, who suffer greater objective pressure and are more likely to report feeling coerced 31 .
Strategies such as enhanced communication between staff and patients, treatment negotiation, and explanation may help to improve how patients experience the admission process 32. Identifying how adolescent patients react to pressures and building an empathetic rapport reduce the social stigma associated with addiction treatment and the effect of social pressures, providing patients with a more positive and beneficial treatment experience. This study aimed to analyze how adolescents who use psychoactive substances react when facing potentially coercive situations during their treatment at a Psychosocial Care Center for Children and Adolescents (Capsia).
Method
This study is part of the research project "The trajectory of adolescent users of psychoactive substances at Capsia," approved by the Research Ethics Committee of Hospital de Clínicas de Porto Alegre. We conducted a cross-sectional study with 229 medical records of adolescent substance users receiving treatment at Capsia (Santa Cruz do Sul, Rio Grande do Sul, Brazil) from 2002 to 2012, describing their biopsychosocial profile and identifying risk factors for early drug use initiation 33 .
Data collection
Data were collected from medical records until data saturation 34,35. The numbers (previously assigned by Capsia) of all 229 medical records were organized into a spreadsheet and arranged in ascending order. A total of 23 (10%) medical records were randomly selected by sequential sampling 35: one medical record was randomly selected by drawing lots among the first 10 numbers; then, based on this sequence, one record was selected from every 10 and analyzed by two of the authors.
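A minimal Python sketch of this sequential (systematic) sampling step, with hypothetical record numbers:

```python
import random

def systematic_sample(record_numbers, step=10):
    """Draw lots among the first `step` records, then take every `step`-th record."""
    ordered = sorted(record_numbers)
    start = random.randrange(step)  # random start within the first 10 records
    return ordered[start::step]

records = list(range(1, 230))       # 229 records, numbered as in the spreadsheet
sample = systematic_sample(records)
print(len(sample), sample[:3])      # roughly 10% of the records
```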
The study subjects were identified by the letter "S" to maintain anonymity, followed by the corresponding number in the spreadsheet, the letter indicating the patient's gender (M, male; F, female) and age. The MacArthur Perceived Coercion Scale 36 helped to identify any entry or record information of potentially coercive situations, where behaviors or events suggested: 1) reduced freedom of choice regarding treatment; 2) being treated was not the patient's choice; 3) seeking treatment was not their idea; 4) lack of patient's control over treatment decisions; and 5) seeking treatment rested more on an external influence than on their choice.
Other situations during treatment included: police involvement, compulsory hospitalization or admission, use of drug containment, treatment opposition (resistance, escape, irritation), and social or parental pressure to enter treatment, as shown in the literature on the subject 32,37-39. Social pressures were then classified according to the referral source (legal, formal, and informal) 18,19, and passages describing patients' reactions to these coercive situations were identified in the medical records. Analysis and description of the quantitative data were performed in SPSS version 18.0 and QSR NVivo 10, and the qualitative data were examined by content analysis 40.
Results and discussion
This study analyzed the medical records of 23 adolescents undergoing treatment at Capsia. Most patients were male (87%), 69.6% of them had committed offenses, and reported using marijuana (34.8%), cocaine (34.8%), crack (30.4%), alcohol (26.1%), tobacco (17.4%), glue (13%), and ecstasy (4.3%). Similar data were found in a previous study analyzing the medical records of all adolescents in treatment at Capsia during the same period 33 . Therefore, the data collected are representative of the general population treated at this facility.
The analysis found 66 notes written by the staff, comprising 81 passages describing different social pressures the patients experienced. Only four records had no account of social pressure. The prevalence of informal social pressure (48.1%) (Table 1) is consistent with the study by Urbanoski 18, suggesting that family and friends often exert pressure on treatment-seeking choice. In the study conducted by Room 41, alcohol and drug users identified family and friends as the most common sources of pressure to enter treatment, indicating that informal pressure precedes formal pressure. This prevalence of informal over formal and legal pressures contrasts with most studies 42 and indicates that these pressures must be addressed and considered during the treatment process. In previous studies, participants identified as voluntary patients also reported some degree of perceived coercion 18,42,43.
The prevalence of resistance (31.6%) and treatment acceptance (17.5%) are consistent with the study by Lorem, Hem, and Molewijk 37 , who, after interviewing psychiatric inpatients, classified their reactions to coercive measures as agreeing and accepting, fighting or resisting, and resignation. However, their study focused on investigating patients' moral evaluation of previously experienced coercion rather than identifying and associating the pressures with the reactions they triggered.
We must emphasize that recording the reactions at the moment of pressure implies a different understanding of these experiences than if they had been reported later, since different factors affect the patients' perception of coercion: the institutionalization period, feelings of gratitude, internalization of experiences 44, personal characteristics 22, understanding of the severity of the condition, and the degree of pressure experienced 18. Nevertheless, reflecting on coercive strategies and their uses helps to prevent possible future damage to both patient and treatment, especially considering that these patients lack further opportunities to reevaluate their experiences due to the poor adherence of psychoactive substance users to treatment 11,45.
The passages describing patients' reactions resulted in 76 associations with social pressures, since one reaction could correspond to more than one pressure (Table 3). Resistance was most commonly associated with informal pressure (33.3%), also being frequent in formal (26.9%) and legal pressures (30%). Lack of motivation (30.7%) and resignation (25%) were most associated with formal and legal pressures, respectively. Omnipotence was linked only with legal pressure, while willingness to be treated was associated only with informal pressure, and denial of drug use was related with both types.
These findings show that enforced forms of control, rather than eliciting immediate positive responses from these patients, increased their perception of coercion, resulting in the absence of a significant association between positive reactions and the types of pressure. It also indicates that these patients felt coerced by different sources to enter treatment. The qualitative data analysis dealt with the three most frequent reaction categories in the quantitative analysis of the observation notes (Table 2): resistance, lack of motivation, and treatment acceptance.
Resistance
Resistance means refusing to receive the proposed treatment; it was associated with all three forms of social pressure and was the reaction most frequently reported by the patients: he arrives at the Capsia accompanied by a protection officer, without the presence of a parent or guardian. The mother quarreled with her son and refused to accompany him to the health care facility. The adolescent reports having been admitted to several clinics, but none worked. He does not want to be admitted and does not want to stop using drugs. He says he uses drugs because he wants to (excerpt from the report on S133M17).
In this case, the adolescent was reluctant to accept hospitalization and seemed to regard the hospital treatment as inefficient, since several previous hospitalizations have not yielded positive results for him. His unwillingness to be admitted and decision to continue using drugs were not respected. The pressure used in this case seems justified by the perceived legal understanding, criticized by Wertheimer 47 individual's decisions. In this sense, his decision to continue using drugs was probably understood as "influenced" by the illness.
Informal pressure exerted in a context of conflict -between the adolescent and his mother, for example -also results in resistance reactions when patients no longer see a family member as someone who can demand something from them. Considering this association, Wertheimer 47 proposes that coercion varies according to the coercive agent's moral force: when the individual under pressure recognizes the coercive agent's right to make demands, the likelihood of feeling coerced decreases. In situations of poor or frayed family relationships, the coercive agent is no longer seen as someone who can impose or propose treatment. Patient S173M13 expresses the same situation of refusal and family conflict, though in a more subtle manner:
Situation 1: He joins the group for the first time. She is worried about her son's situation, who left home and school. He has no limits, disrespects her, and sleeps in a bathroom in the back of their house. He stole all his grandmother's belongings.
Today he agreed to join the group (on S173M13's mother).
Situation 2: First day in the group meeting. He was reluctant to come, kept silent and held his head down during the entire meeting (report on S173M13).
Accounts of social damage, linked to psychoactive substance abuse, such as dropping out of school, theft, and exposure to degrading situations (sleeping in the bathroom, for instance), seem to explain the informal pressure. According to Wild, Roberts, and Cooper 20 , society often uses control strategies to treat substance abusers because they represent a social and economic burden. Bittencourt, França, and Goldim 33 described a similar situation when associating social and health problems with the referral of male adolescents with multiple offenses for treatment at Capsia. Another issue associated with resistance is the avoidance of treatment through escape: Situation 1: A call from the shelter informing that [S123M17] was sent there by the judge until his hospital admittance for detoxification. (…) I call the hospital and get a bed for today. The transfer will occur in the afternoon, and a Capsia employee will accompany the patient (report on S123M17).
Situation 2:
A person from the hospital calls to report that [S123M17] escaped. He was found by police officers and taken back to the hospital. He keeps saying he will run away again and stay in the hospital for as long as he wants (report on S123M17).
These excerpts show two interactions between the Capsia staff and other patient care facilities. First, the adolescent suffers legal pressure to enter treatment; then, after admission, the hospital nurse (formal pressure) calls to inform the Capsia staff (formal pressure) of his escape. The patient shows clear opposition to the situation: he cannot refuse to participate in the hospital treatment, so he escapes and is recaptured by the police, and continues threatening to escape again. In this sense, coercion may involve different social pressures: She spent the night at home and slept well, according to her mother. She arrives at Capsia showing resistance, refuses to enter the consultation room, states that she no longer wants to be hospitalized and wants to continue using drugs. She physically and verbally assaults her mother, says she will "smash everything" at the hospital, and escapes. I write requesting a search warrant for tomorrow (report on S153F16).
Here the patient's resistance and later escape are motivated by her mother's informal pressure and by the health professional's formal pressure in requesting judicial intervention. In addition, we have an angry reaction, represented by the physical and verbal assault and the threats to "smash everything" at the hospital.
Lorem, Hem, and Molewijk 37 grouped fighting and resisting reactions into the same category, as both are associated with the patients' lack of control over the treatment and loss of autonomy: struggle and resistance are common in situations where the patient perceives coercion as a form of threat. This may not be the case here, but the pressures may have been interpreted in this way by the patients.
Resistance to care seems to represent a struggle for self-determination, in which the reaffirmation of the desire to continue using drugs is employed to escape treatment. When it loses effectiveness, patients use other available defenses: escape, aggression, and silence. Marked by the presence of physical, verbal, and possibly psychological abuse, resistance questions the effectiveness of such care, especially when the patients show clear opposition.
Lack of motivation
Motivating patients to engage in treatment is one of the greatest challenges in mental health care. According to the self-determination theory 48 , motivation depends on personal, social, and environmental factors. A person may show personal interest in performing an activity (intrinsic motivation), may perform an activity without engaging (amotivation), or may do it aiming at the result (extrinsic motivation).
Thus, lack of motivation means poorly integrating and internalizing the importance of treatment, sought as a result of external pressure. Such behavior corresponds to extrinsic motivation regulated by external factors, where the individual enters treatment to relieve these pressures or escape possible sanctions 18 .
According to Urbanoski 18 , most patients undergoing treatment for substance use disorders drop out because care became meaningless. This lack of motivation to engage in treatment, similarly to resistance, affects the whole process, mainly in terms of adherence: the mother has kept him locked at home since Sunday (…) to prevent drug use. She asked the Child Protective Services [CPS] for help. (…) The patient is sleepy and apathetic. He remained silent during the interview (report on S93M13).
Here, both the mother (informal pressure), who locked the adolescent at home to prevent him from using drugs, and the Child Protective Services (formal pressure) sought to engage the adolescent into treatment, without results. Despite no further information on the patient, his silence during the interview indicates a lack of personal interest in the treatment. The report on S43M14 is similar: he and his mother came for the initial interview accompanied by a CPS agent, who did not participate in the interview (he only brought the referral papers). The mother reports her interest in continuing with the treatment; [S43M14] only seems to be complying with the court order.
The patient, brought in by his mother and the CPS agent, also experienced the legal pressure of a court order. The healthcare team noticed the patient's disinterest in the treatment, highlighting that the mother is the one willing her son to be treated in the health care facility. The observation note included in S13M17's medical record is even clearer: The patient says that he was absent last week because he lacks the motivation to engage in treatment and believes he can consume alcohol moderately. He came today because "CPS made me." He explains that "I have to come," otherwise "the judge sends me to (…)". He seems annoyed and unmotivated to continue treatment. He asks if 2 or 3 weeks is enough to meet the CPS and judge's requirement (report on S13M17).
This report shows the patient's complete lack of motivation, and that he is there only due to external pressures -once they cease, he intends to stop the treatment.
All these attempts show that engaging in treatment is often an extrinsic motivation regulated by multiple external factors such as pressure from friends and family, the legal system, and other formal sources 17 . According to Ryan and Deci 48 , this shows that seeking treatment is not a selfdetermined act, as behaviors motivated solely by external factors reflect poor individual autonomy, feelings of alienation, and loss of control.
Acceptance
In our study, acceptance of treatment and proposed interventions relied on the relationship established between patient and coercive agent.
The patient arrives with his mother. He says that it is hard to stop drinking. He drank every day last week. He drinks shots in the morning to stop shaking and at night to fall asleep. He says that he talked to his girlfriend over the weekend and she said that, if he quits drinking, she will come back to him. This week he will attend a Narcotics Anonymous meeting because he found out it is close to his home. The mother says that the family cannot handle the situation, her married children do not visit her anymore because they do not want to see the problem. The mother feels exhausted. She wants him to be hospitalized (report on S3M16).
A set of informal pressures - his girlfriend's willingness to resume their romantic relationship and the potential to restore good family relationships - and the patient's awareness of the severity of alcohol addiction may have facilitated his positive reaction to pressure. Another patient (S153F16) perceived her friends' pressure to stop using crack as positive, indicating her willingness to stop using psychoactive substances.
A study by Goodman, Peterson-Badali, and Henderson 49 showed that family members, romantic partners, and friends often pressure patients to seek treatment and reduce psychoactive substance use. As friendship and romantic relationships are essential in adulthood, such individuals can exert considerable influence.

Patients also accept treatment after pressure by professionals from different institutions. S193M11, for example, sought treatment after a referral from his school: he has been using marijuana for about two months. He was referred because he was caught smoking at school. He has a good relationship with classmates and teachers and has good grades. Good family relationship (…). Behavior during the interview: collaborative, good insight (report on S193M11).
In this case, the substance user not only accepts the treatment but actively participates in it - a positive attitude influenced by his good relationship with different social groups. Schenker and Minayo 12 state that family and school are crucial in enhancing resilience and promoting critical reflection on drug abuse, but these relationships must be established in a way that strengthens ties of trust. Wei and collaborators 51 also indicate social bonds as factors for motivating change and helping adolescents to maintain abstinence.
However, accepting treatment does not mean the patient understands the process as important or necessary: he arrives with his grandmother. He says he came looking for help because of alcohol use. He says he will get married and become a father soon, so he wants to "get his life back on track." He says he came because the social worker asked him to and that he had no desire to come spontaneously (report on S13M17). S13M17 admits seeking treatment only after external pressure (social worker), and the report shows how fundamental his positive relationship with the coercive agent was for accepting treatment. On this issue, Lorem, Hem and Molewijk 37 showed that pressure was more easily accepted when the patient trusted the coercive agent. These findings are consistent with the research by Rugkåsa and collaborators 52 , who indicate that a trusting relationship between patients and health professionals is a prerequisite for negotiating treatment or influencing the patient to engage in the process. For the authors, a good relationship must value the patient's concerns and priorities.
Goodman and collaborators 50 state that young people who regard themselves as having greater responsibility toward others are more likely to recognize their addiction as problematic and to seek change. In these cases, patients perceive these relationships not as coercive or treatment-related, but as self-determined and justified by the losses involved. In S13M17's case, acceptance is associated with the idea that he must "get his life back on track" for his unborn child and family.
A good relationship with friends, family, health professionals, and other individuals who are part of the patient's social network appears to evoke these individuals' commitment, leading to greater acceptance of interventions, facilitating engagement in treatment, and providing opportunities for reflecting about their lives. This is ultimately a personal decision, but acceptance may occur through the patients' perception that coercive agents have the right to require them to undergo addiction treatment.
Final considerations
Data analysis showed that social pressures are common in the treatment of adolescents who use psychoactive substances, with informal pressures being the most prevalent, a finding consistent with the literature 18,19,31,41,49. Vulnerability conditions observed in these young patients, characterized by substance use, dropping out of school, offenses, and contentious family relationships, are often used to justify different forms of coercion.
Although acceptance was one of the most frequent reactions, negative reactions prevailed when the association between reactions and social pressures was analyzed, taking the form of resistance, lack of motivation, and resignation. Resistance appeared in the face of conflicting relationships with the coercive agent and when the patient's wishes were disregarded or disrespected. Lack of motivation, in turn, emerged when treatment was sought solely because of the experienced pressure, resulting in low treatment adherence.
A good relationship between patients and coercive agents stood out when analyzing acceptance reactions. This relationship allowed a less confrontational environment and greater engagement by providing opportunities to reflect on the need for addiction treatment.
This study allows a better understanding of these patients' experiences by showing that social pressures affect these individuals in different ways. The importance of a good relationship between patients and caregivers, and between patients and other individuals of their social circle, emphasizes that health professionals should act alongside them to manage conflicts and deal with these adolescents.
Researchers and addiction-treatment institutions should give further attention to informal and formal pressures, as they affect both patients and treatment, considering that patients' reactions when facing different potentially coercive situations are mostly negative.
Basing the study on medical records was an appropriate choice, as these documents represent a rich source of information, but relying solely on external elements identified as potentially coercive had its limitations. Recorded by different staff members, the data may have presented disparities, incompleteness, and omissions. The medical record review also precluded access to the patients' accounts of the events and of how they experienced them. Further studies including direct interviews with psychoactive substance users are important for a better understanding of how social pressures affect them, as they allow patients to recount their experiences and reasons for seeking treatment.
For their financial support, we would like to thank the Coordination for the Improvement of Higher Education Personnel (Capes) and the Productivity Scholarship Program of the Cesumar Institute for Science, Technology, and Innovation (Iceti). | 2020-07-02T10:06:27.386Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "2b98812be994c16183d5226323b8653da8b0863a",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bioet/v28n2/1983-8042-bioet-28-02-0297.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "86a670f39b0e2c3fe9717461f49bd9eb4e6022b3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245470677 | pes2o/s2orc | v3-fos-license | Environmental control of marine phytoplankton stoichiometry in the North Atlantic Ocean
Significance As they grow, die, and sink into the ocean’s interior, oceanic phytoplankton drive the so-called biological carbon pump, one of the main biological processes regulating atmospheric carbon concentrations. The biological carbon pump is, therefore, key to climate regulation. Its efficiency is largely determined by the coupling of marine biology to ocean geochemistry through the C:N:P:Fe stoichiometry of phytoplankton biomass, yet what determines this stoichiometry remains poorly understood. Based on a model of plankton biology, we characterize control mechanisms of the C:N ratio of phytoplankton biomass in the North Atlantic, which explain extensive sets of apparently conflicting observations. These findings could improve the predictive ability of global ocean models regarding climate change and the role of marine biology in its mitigation/aggravation.
The elemental composition of phytoplankton biomass, particularly its C:N:P:Fe ratio, is a crucial aspect of ocean biogeochemistry (1)(2)(3)(4). The stoichiometry of biological uptake, relative to the ratio of supply, determines which nutrients are limiting to growth. Additionally, the ratio of carbon to the limiting nutrient in organic matter determines the maximum amount of carbon that can be fixed and exported for a given nutrient supply. Phytoplankton stoichiometry is therefore a key component of the efficiency of the biological carbon pump, and as such, understanding its sensitivity to environmental change is key to understanding ongoing changes to the marine ecosystem and climate (3,4).
Although phytoplankton stoichiometry appears to be relatively well constrained when averaged over the global ocean, trends have been observed across a range of spatiotemporal scales (1)(2)(3)(4)(5)(6)(7). This suggests some level of environmental control, with significant correlations between the stoichiometry of marine organic matter and environmental and biological factors (4)(5)(6)(7)(8)(9). However, strong intercorrelations between light, temperature, nutrient availability, and planktonic diversity make it difficult to identify which (if any) of these relationships are causal (4). This difficulty is also partially attributable to an incomplete understanding of the biological mechanisms involved. For example, it remains unclear whether the observed variations result from the physiological (plastic) response of phytoplankton organisms to the variability of their environment, from ecological shifts between competing phytoplankton populations, or from the adaptive evolution of phytoplankton traits.
To address these questions, we use a simple model of plankton ecophysiology. Based on experimental observations (10)(11)(12)(13), the model resolves how temperature and nutrient availability shape the physiology of phytoplankton organisms depending on their size (i.e., how they influence nutrient uptake and division rates and hence phytoplankton stoichiometry; see model description and SI Appendix, Supplementary Discussions SI1 and SI2). As nitrogen (N) is the most limiting nutrient for phytoplankton growth over most of the global ocean, including in the North Atlantic (14,15), we focus here on C:N stoichiometry. Based on this parametrization of phytoplankton physiology, the model resolves the ecology and evolution in size of phytoplankton and zooplankton populations involved in competitive and trophic interactions. This formulation allows the functional composition of plankton communities to emerge from well-established ecophysiological constraints and environmental conditions (16)(17)(18)(19).
We use this model to disentangle the respective role of N availability, temperature, and additional environmental drivers in controlling the C:N ratio of phytoplankton biomass and to identify the biological mechanisms involved at different temporal and spatial scales by confronting the model predictions with multiple observational datasets.
Results
Ecophysiological Basis of Phytoplankton C:N Ratio. To illustrate the ecophysiological mechanisms underlying the environmental control of phytoplankton stoichiometry, we simulate a highly idealized initial scenario: a single population of phytoplankton with a fixed size (from now on expressed as equivalent spherical diameter, or ESD), exploiting a steady influx of nutrient. If nutrients are scarce, uptake by the phytoplankton is slow relative to the maximum achievable rate of cell division. As a result, the phytoplankton nutrient content drops, and the C:N ratio increases (Fig. 1A). Conversely, if nutrients are abundant, uptake is fast, and division becomes limiting to phytoplankton growth. Nutrients accumulate within the cell, and the C:N ratio drops. Variation of phytoplankton C:N stoichiometry is bounded by maximal and minimal values, respectively determined by the minimal structural content of the phytoplankton cell in nutrient, Q_min, and by the intracellular dynamical equilibrium between saturated uptake (V_max) and division (μ_max) rates in conditions of nutrient excess (SI Appendix, Supplementary Discussion SI1). The size dependence of phytoplankton physiological properties predicted by the model can be summarized as follows: 1) larger phytoplankton are characterized by a higher stoichiometric plasticity, 2) the maximum C:N ratio increases log-linearly with phytoplankton size (as Q_min decreases with size), and 3) the minimum C:N ratio is maximized at intermediate cell sizes.
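These bounds can be written compactly. The following is a minimal sketch, assuming the quota Q follows the balance between uptake V and dilution by division μQ, with the structural minimum Q_min imposed as a floor (the paper's exact formulation is given in its SI Appendix):

```latex
% Quota balance for a single population at steady state.
\[
  \frac{dQ}{dt} = V - \mu\, Q
  \quad\Longrightarrow\quad
  Q^{*} = \frac{V}{\mu},
  \qquad
  Q_{\min} \le Q^{*} \le \frac{V_{\max}}{\mu_{\max}},
  \qquad
  \mathrm{C{:}N} = \frac{1}{Q^{*}} .
\]
% Nutrient scarcity (small V) drives Q* toward Q_min, i.e. the maximal C:N;
% nutrient excess saturates both rates, so Q* -> V_max/mu_max, the minimal C:N.
```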
Although this stoichiometric plasticity is not directly influenced by temperature, indirect effects emerge as nutrients are drawn down to lower levels in warmer environments (as represented by their temperature-dependent R*_N value; see SI Appendix, Supplementary Discussion SI1). By accelerating phytoplankton metabolism, higher temperatures result in 1) larger ranges of viable phytoplankton sizes, 2) overall lower R*_N, and 3) R*_N being minimized by phytoplankton of smaller size (ESD of 2 to 3 µm at 5°C and 0.3 to 0.4 µm at 25°C; Fig. 1B).
As it sets equilibrium conditions of nutrient availability (Fig. 1B), we predict phytoplankton size to have an indirect effect on phytoplankton stoichiometry (Fig. 1C). Small phytoplankton can deplete inorganic N to very low R*_N values but are characterized by low stoichiometric plasticity and low, structural C:N ratios. On the other hand, large phytoplankton can exhibit very high C:N ratios in nutrient-depleted conditions but have a limited ability to generate such conditions. As a result, and as previously observed in laboratory experiments (10), C:N ratios (in monocultures at equilibrium) are maximal for phytoplankton of intermediate sizes, which have both the capacity to deplete N and the physiological plasticity to be affected by it. The size associated with maximal C:N, furthermore, shifts toward larger organisms at higher temperature (Fig. 1C), as the capacity of larger phytoplankton to deplete nutrients increases with temperature (Fig. 1B).
When comparing our predictions to previous observations (10, 20-22), we note that attempts to characterize the in situ link between cell size and C:N stoichiometry may often be confounded by uncontrolled variations in environmental conditions and taxonomic differences across size classes. For instance, results from Lomas et al. (20) and Martiny et al. (6) suggest that cyanobacteria exhibit on average higher C:N ratios than the larger eukaryotes they coexist with, while Baer et al. (21) seem to show the opposite, and Garcia et al. (22) remain inconclusive regarding the existence of a link between size and C:N stoichiometry among eukaryotes. Those claims can, however, be reconciled when considering that larger eukaryotes can exhibit both very high and very low C:N ratios depending on the environmental conditions, which is both confirmed by the review by Tanioka et al. (23) and predicted by our model. Additionally, our predictions are both supported by the experimental data presented in the Marañon et al. study (10), used to parametrize the size dependence in our model, and conserved when using alternative datasets (11) for that parametrization (SI Appendix, Supplementary Discussion SI5).
Seasonal Control of Phytoplankton C:N Ratio in Oligotrophic Systems. Having established the principal physiological mechanisms in play at the cellular level at equilibrium, we next consider the implications of these mechanisms within a seasonally varying environment, here an idealized representation of the seasonal cycle at the Bermuda Atlantic time series study site [BATS dataset (24); Fig. 2]. This overall oligotrophic system is characterized by a mild seasonal variation of temperature (Fig. 2A) and by generally nutrient-depleted conditions interrupted by a nutrient pulse during wind-driven winter mixing (Fig. 2B). Observations show that the C:N stoichiometry of the particulate organic matter is at its lowest during the winter (January through March) and at its highest during the nutrient-depleted summer (June through September; Fig. 2C). These observations thus suggest either a negative relationship between N availability and the C:N ratio or a positive relationship with temperature. As shown by our exploration of the ecophysiological basis of phytoplankton stoichiometry, our model provides a mechanistic explanation for both of those relationships. Increasing the model complexity only minimally, we simulated the dynamics of a single population of small nanoplankton [representing the ~2-µm-diameter phytoplankton that dominate this habitat (25)] controlled by a single population of zooplankton. With seasonal temperature and nutrient input forcing parametrized according to field data (24), our model predicts the C:N stoichiometry of the phytoplankton to match almost perfectly the monthly median of the observations. We then ran the model with each of the two environmental drivers varying in isolation throughout the year (the other one remaining constant). We found that while seasonal variations of nutrient alone could reproduce the observed variations in C:N stoichiometry, those of temperature could not (SI Appendix, Supplementary Discussion SI6). This suggests a predominant role of N availability relative to temperature in controlling the C:N stoichiometry of phytoplankton biomass. Our results, therefore, confirm the previously described (23, 26-29) negative relationship between nutrient availability and C:N ratio in oligotrophic systems and highlight stoichiometric plasticity as the most plausible biological mechanism. It has been previously suggested that this relationship could be used to extrapolate the spatial distribution of phytoplankton stoichiometry in the ocean (27-29). This can reasonably be done only if the plastic response of the phytoplankton to N availability is the principal driver of phytoplankton stoichiometry across large spatial scales. In the following section, we test whether this is the case.
Latitudinal Control of Phytoplankton's C:N Ratio in the North Atlantic. To test this hypothesis, we looked at the spatial variation of the C:N stoichiometry across the North Atlantic based on a large collection of data broadly distributed across the North Atlantic basin, compiled by Martiny et al. (7) from several studies and completed with Atlantic Meridional Transect (AMT) data, including observations made at higher latitudes (SI Appendix, Fig. 6A). Fig. 3 shows the observed C:N ratio as a function of latitude in the North Atlantic. The C:N ratio shows an oscillating latitudinal pattern, with peaks at 0 and 40°N and much lower values found in the oligotrophic subtropical gyre at 20°N (SI Appendix, Fig. 6B and Supplementary Discussion SI7) and in the polar and subpolar regions. This latitudinal pattern reveals a more complex relationship between nutrient availability and C:N stoichiometry than a straightforward negative link to nutrient supply: the N to C:N relationship appears to be positive between 0 and 40°N, becoming negative at higher latitudes. Two potentially complementary hypotheses can explain this apparent contradiction between observations made at the local and global scales. First, other environmental factors, not contributing to variability in the Bermuda system, could be at play when considering the ocean-wide distribution of the C:N stoichiometry. Could temperature, for instance, play a role in the latitudinal control of phytoplankton stoichiometry? Second, phytoplankton plasticity might not be the only biological mechanism involved in the environmental control of phytoplankton stoichiometry. The shift in the size composition of the phytoplankton with latitude in response to latitudinal shifts in environmental conditions is a well-studied phenomenon (30). Can ecoevolutionary shifts in the size composition of plankton communities then be responsible for the observed pattern of latitudinal variation of the phytoplankton C:N ratio?
We ran our model of the plankton community to simulate the ecoevolutionary emergence of the size composition of those communities along an idealized environmental gradient representative of the North Atlantic transect (31) (Fig. 3B). Our model predicts how the environmental conditions (temperature and nutrient availability) dictate the range of viable phytoplankton sizes (i.e., those that could colonize the system in the absence of competitors and predators) along the gradient (Fig. 3C and SI Appendix, Supplementary Discussion SI3). It then predicts how ecoevolutionary processes drive the emergence, within this size range, of the size distribution of the phytoplankton community (see model description; Fig. 3D). Those predictions are in line with field observations (30). The size composition of the phytoplankton community determines, together with the local temperature, the phytoplankton's ability to exploit the local nutrient input and, hence, the nutrient availability at equilibrium, R*_N, along the transect (Figs. 1B and 3B). The local R*_N, in turn, determines the stoichiometry of each phytoplankton population given its characteristic cell size (Fig. 3C). When averaged over the whole community, the latitudinal variation of the phytoplankton C:N stoichiometry predicted by our model qualitatively matches the observations (Fig. 3A).
Below 40°N, we find regions characterized by higher nutrient influxes (the equatorial upwelling and the northern boundary of the subtropical gyre) to exhibit higher C:N ratios because they are dominated by phytoplankton of intermediate to large cell sizes (Fig. 3D) experiencing low-R*_N conditions and, hence, characterized by large C:N ratios (Fig. 3C). Those low R*_N values are the result of warm temperatures, which increase the overall ability of the phytoplankton, and of the larger size classes especially, to consume nutrients (Fig. 1B). Regions characterized by low nutrient influxes (subtropical gyres, i.e., ~20°N) show lower C:N ratios because the larger size classes, capable of achieving higher C:N ratios, are excluded by the nutrient scarcity (Fig. 3D). This result, therefore, suggests that, contrary to the seasonal variation of C:N in oligotrophic systems, the latitudinal variation of the C:N ratio is driven by ecoevolutionary shifts in the phytoplankton functional composition, leading to the opposite relationship between N availability and the C:N ratio.
Polewards of 40°N, however, we see the opposite trend, with C:N ratios declining as nutrient levels increase. We find this trend to be driven by the poleward decline in temperature and increase in grazing pressure. Although the size composition of the phytoplankton communities varies relatively little north of 40°N (Fig. 3D), the metabolism of the phytoplankton slows down as temperature drops, and phytoplankton populations decline in their ability to exploit incoming nutrients (regardless of their size; Fig. 1B). Similarly, the increase in nutrient input with latitude strengthens the top-down control of the phytoplankton by the zooplankton, and nutrient assimilation by the phytoplankton becomes less effective in proportion to the nutrient input (32) (see a more detailed discussion of the latitudinal distribution of those two effects in SI Appendix, Supplementary Discussion SI8). Nutrient concentrations at equilibrium are, therefore, predicted to be higher at polar latitudes, which in turn results in a lower C:N stoichiometry of the phytoplankton (Figs. 1C and 3C).
We therefore find the North Atlantic latitudinal transect to be structured into three distinct regimes: 1) regions characterized by high nutrient influxes and by moderate to high temperatures, in which the C:N ratio is high (the equator and latitudes between 35 and 45°); 2) regions of low C:N ratios due to low nutrient influx (subtropical gyres); and 3) regions of low C:N ratios due to low temperatures and strong grazing pressure (high latitudes). Temperature and grazing pressure are, alongside N availability, prevalent factors determining phytoplankton stoichiometry in the North Atlantic.
Discussion
Our results shed light on the complexity of the environmental control of phytoplankton stoichiometry. This control is characterized by a diversity of environmental drivers and biological mechanisms, interacting in a complex fashion that varies across spatiotemporal scales. Nitrate availability appears to be the main controller of phytoplankton stoichiometry in the North Atlantic, but temperature and grazing also play key roles across large spatial scales. Although plasticity emerges as an important component of the environmental control of phytoplankton stoichiometry, especially at the seasonal scale in oligotrophic systems, ecoevolutionary processes play a major role in determining basin-scale latitudinal patterns. We show that this complexity can be resolved by recreating key ecosystem features at different spatial and temporal scales with a relatively simple, empirically parameterized model of plankton physiology, ecology, and evolution, demonstrating that such models can provide a sufficient mechanistic framework to reconcile and explain apparently conflicting observations.
In our analysis, we have assumed that N is the main limiting factor for phytoplankton growth. Under this simplifying assumption, we obtained a good qualitative agreement with data from the North Atlantic, where this assumption holds for the majority of phytoplankton (15). We note that further investigation of the role of other mechanisms will be required to achieve more quantitative predictions regarding the determination of phytoplankton stoichiometry in the North Atlantic: for example, the contribution of N2-fixing populations to biomass production and stoichiometry in the subtropical and tropical regions (33, 34), or limitation by iron and phosphorus in the polar and subpolar North Atlantic (15, 35-37). More generally, while the mechanisms we propose here may be generalizable to other biogeochemically similar regions of the ocean, they are less likely to hold in the many regions where other environmental factors, such as light, iron, or phosphate availability, become the predominant limiting factors for phytoplankton activity. For example, a recently published dataset describes the latitudinal variation of environmental drivers and of the stoichiometry of organic matter in the South Pacific (38). These data show a markedly different latitudinal relationship between N availability and C:N stoichiometry, probably as a result of phytoplankton biology being controlled by environmental drivers specific to the region, most likely iron limitation, as suggested by in situ enrichment experiments (15), observations (35-37), and modeling studies (39). Limited to the influence of N availability and temperature on phytoplankton physiology, our modeling framework cannot, in its current form, accurately simulate those types of systems. The generality and explanatory power of our approach can nevertheless be improved. Indeed, additional colimitation factors (light, other macronutrients such as phosphate, or trace elements such as iron) can, in principle, be implemented in our model to allow accurate predictions of the physiological, ecological, and evolutionary responses of marine phytoplankton to the variety of environmental regimes to which it is exposed. Although such developments are technically feasible (40), a main obstacle is the lack of experimental data required for the parametrization of mechanistic models, for instance regarding the influence of iron on phytoplankton physiology. We therefore think that further characterization of phytoplankton physiology in the laboratory should complement the ongoing improvement of ocean biogeochemical monitoring to achieve a better understanding of oceanic ecosystem function. Additionally, we are convinced that general circulation models (GCMs) accurately simulating the various environmental characteristics of oceanic ecosystems (e.g., light, macro- and micronutrient availability, temperature, pH, oxygenation, and water stratification) and their variations through time and space would provide an ideal framework to further investigate how environmental drivers other than N availability and temperature, as well as combinations of those factors (e.g., colimitation of phytoplankton physiology by N, P, and Fe availability), influence phytoplankton stoichiometry in the North Atlantic and in the global ocean overall.
As the development of these "organisms-to-ecosystems" models goes on, their integration into GCMs will give access to predictions of a more quantitative and general nature than those presented in the present analysis. Moreover, such coupled models will allow addressing, quantitatively, the question of the effect of climate change on phytoplankton stoichiometry at the planetary scale (41, 42) and predicting more accurately the evolution of the carbon pump's efficiency and of its role in mitigating climate change. Although previous analyses (42) of GCM results concluded that accounting for phytoplankton stoichiometric plasticity only marginally changes the global predictions of models, we think that this conclusion should be reevaluated in light of the findings presented here.
Materials and Methods
Model. We consider an idealized model of a marine plankton ecosystem, characterized by a concentration in N (N) and an arbitrary number of phytoplankton (P_i) and zooplankton (Z_j) populations. Each of those populations is characterized by a specific set of ecophysiological traits. N is advected into the system at a rate I (in per day) and at a concentration N_0 (in micromoles of N per liter). It is then consumed by phytoplankton, which are themselves grazed by zooplankton. Nutrients are taken up by phytoplankton cells at a rate V, the biomass-specific uptake rate of the phytoplankton (in moles of N per mole of organic carbon per day). Once taken in, nutrients enable phytoplankton growth μ (in per day). Both rates follow Monod-like kinetics,

V = V_max · N / (N + K)  and  μ = μ_max · N / (N + κ),

where V_max and μ_max are the maximum uptake and growth rates, respectively, and K and κ the two half-saturation constants. A widely used approach to modeling phytoplankton systems, based on the Monod model of growth, is to consider phytoplankton growth and nutrient uptake to be coupled and to use the Redfield ratio as a scaling factor, therefore assuming a fixed biomass stoichiometry (43,44). Quota models (45,46) instead consider those two mechanisms as uncoupled, nutrients being stored inside the phytoplankton cells before being used for growth. Based on previous work (19,47), we use Monod-like formulations to describe both uptake and growth (see a detailed presentation of the model in SI Appendix, Supplementary Discussion SI1). The stoichiometry of the phytoplankton cell of a population i is then described by its intracellular nutrient quota Q_i (in moles of N per mole of organic carbon), determined by the dynamical balance between uptake and growth; at that balance,

Q_i = V_i / μ_i,

and Q_i ranges between Q_min and Q_max, which are respectively approached when nutrients are scarce and in excess (SI Appendix, Supplementary Discussion SI1). It is well established that these physiological parameters are primarily correlated with organism size (10, 11, 19, 48-52). This size dependence is typically described by power-law relationships of the type p(x) = a · x^b, with p the parameter value and x the organism's volume. Similarly, the kinetics of phytoplankton metabolism typically increase with temperature (48), which is often described in plankton models by the Norberg-Eppley relationship p(T) = p(T_ref) · e^(α(T − T_ref)), with p the value of a given parameter (here uptake or growth), α the temperature exponent, and T_ref a reference temperature (12,13). By implementing those temperature and size dependencies into the parametrization of the model (detailed in SI Appendix, Table S1), we can evaluate analytically, for a given environmental forcing of the ecosystem (i.e., for a given set of I, N_0, and temperature T), the range of viable phytoplankton cell sizes and, for a single phytoplankton population within that range, the corresponding ecological attractor in terms of nutrient concentration, biomass, and C:N stoichiometry at equilibrium (the solving method and some biological implications are described in detail in SI Appendix, Supplementary Discussion SI1).
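To make this parametrization concrete, here is a minimal runnable sketch of the cell-level formulation just described, assuming the Monod forms reconstructed above. The allometric and temperature coefficients below are illustrative placeholders chosen only to produce plausible magnitudes; the fitted values used in the paper are given in its SI Appendix, Table S1.

```python
import numpy as np

def allometric(a, b, volume):
    """Power-law size scaling p(x) = a * x**b, with x the cell volume (um^3)."""
    return a * volume**b

def eppley(p_ref, alpha, temp, temp_ref=20.0):
    """Norberg-Eppley temperature scaling p(T) = p(T_ref) * exp(alpha*(T - T_ref))."""
    return p_ref * np.exp(alpha * (temp - temp_ref))

def equilibrium_CN(N, volume, temp):
    """Equilibrium C:N ratio (= 1/Q) of one population at ambient nutrient level N.

    Uptake V and division mu are uncoupled Monod-like rates; the quota Q is
    their dynamical balance, floored at the structural minimum Q_min.
    """
    # Illustrative placeholder coefficients -- NOT the paper's fitted values.
    V_max = eppley(allometric(0.15, -0.20, volume), 0.06, temp)   # mol N / mol C / d
    mu_max = eppley(allometric(1.50, -0.25, volume), 0.06, temp)  # 1 / d
    K = allometric(0.20, 0.30, volume)       # uptake half-saturation (umol N / L)
    kappa = allometric(0.05, 0.30, volume)   # division half-saturation
    Q_min = allometric(0.05, -0.05, volume)  # structural minimum quota

    V = V_max * N / (N + K)
    mu = mu_max * N / (N + kappa)
    Q = max(Q_min, V / mu)  # scarce N -> Q near Q_min (high C:N); excess -> V_max/mu_max
    return 1.0 / Q

# Example: C:N increases as N is drawn down, for a 2-um-ESD cell at 20 degrees C.
volume = (np.pi / 6.0) * 2.0**3  # cell volume from equivalent spherical diameter
for N in (5.0, 0.5, 0.05):
    print(f"N = {N:5.2f} umol/L  ->  C:N ~ {equilibrium_CN(N, volume, 20.0):4.1f}")
```

With these placeholders the sketch reproduces the qualitative behavior of Fig. 1: C:N rises toward its structural maximum (1/Q_min) as N is drawn down and falls toward μ_max/V_max under nutrient excess.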
In addition to a basal mortality rate d (in per day), phytoplankton populations experience grazing pressure from zooplankton populations, which increases with zooplankton size (53-55) and with how well the size of a specific phytoplankton population fits the size preference of the zooplankton, the grazing pressure being maximized for a specific predator-to-prey size ratio (56). The emergence of the size distribution of the phytoplankton and zooplankton guilds is, therefore, the result of community ecology (i.e., the variation in the abundance of the phytoplankton and zooplankton populations resulting from the competition and predation interactions between those populations) but also of adaptive processes. Indeed, each time new individuals are produced, mutations occur, and some of those individuals are characterized by sizes that are slightly different from that of their ancestor. Depending on their fitness, those mutants can either die without descendants or reproduce and lead to the emergence of a new population (13,14). Using the method described in Sauterey et al. (17), we resolve the ecoevolutionary dynamics of the ecosystem and its equilibrium characteristics in terms of nutrient abundance (R*_N) and of plankton (phytoplankton and zooplankton) density and size composition. A typical prediction of this type of model is that size-directed grazing drives the diversification of the plankton community, while the total number of coexisting populations is limited by resource availability (17,57). | 2021-12-25T06:16:25.740Z | 2021-12-23T00:00:00.000 | {
"year": 2021,
"sha1": "bb55694ce65444909afa5052bca96269116b85bb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.pnas.org/content/pnas/119/1/e2114602118.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "db7ddcfa78f6d0b487d8bb16c6300e1588639d34",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51925831 | pes2o/s2orc | v3-fos-license | The impact of dialysis on critically ill elderly patients with acute kidney injury: an analysis by propensity score matching
ABSTRACT Introduction: Aging is a global phenomenon. Recent forecasts indicate that Brazil will have the sixth-largest population of elderly individuals in the world by 2020. The incidence of acute kidney injury (AKI) among the elderly varies, but studies have indicated that older individuals are more prone to developing AKI and have higher mortality rates than the general population with renal disease. The impact of dialysis on elderly patients with AKI - and on critically ill individuals with multiple dysfunctions - has been discussed for years. Evidence indicates that for this group of patients dialysis does not positively impact survival and, in some situations, might even hasten death. This study investigated a population of elderly individuals with AKI seen in intensive care units to assess, through Propensity Score Matching, the impact of dialysis on their outcomes. Methods: Data were collected from the charts of patients aged 60 years or older seen at the intensive care unit of a general hospital between January 2012 and December 2014 and diagnosed with AKI. Results: The study included 329 patients with a mean age of 75.4 ± 9.3 years. Ischemic AKI was the most prevalent etiology (54.7%), and 28.9% of the patients needed dialysis. No difference was seen in the death rates of dialysis and non-dialysis patients aged 70+ years. Conclusions: The data suggested that dialysis did not seem to impact the death rates of critically ill patients with AKI aged 70+ years.
Introduction
Life expectancy has grown steadily all over the world. Moreover, the advancement of medical science has enabled elderly patients with severe disease to survive for longer. This scenario, nonetheless, raises the question of whether advanced life support truly impacts the progression of elderly patients on intensive care or whether it simply introduces additional suffering in the final period of one's life. 1,2,3,4 Dialysis is one of the therapies prescribed to support the lives of patients in critical condition by enabling the establishment of a more suitable metabolic and nutritional state in individuals with acute kidney injury (AKI).
However, dialysis entails risks and complications such as the ones related to implanting venous catheters, hemodynamic instability, changes in antibiotic levels, and bleeding on account of heparin administration. In recent years, the impact of dialysis on the progression of patients with AKI has been discussed. Particular attention has been devoted to the elderly, a group in which organ dysfunction and comorbidities are seen more frequently. 2,3,5 AKI is a common finding in hospital settings strongly associated with increased mortality. Prevalence may be as high as 50% in critically ill patients. 6 Advanced age is a known risk factor for AKI. Feest et al. reported up to eight-fold increases in the prevalence of AKI in patients aged 60+ years. 7 In addition to increased prevalence, evidence indicates that advanced age is a risk factor for death and permanent loss of renal function requiring the prescription of chronic dialysis. 1,2,8,9 In this population, the decision to start dialysis is based on general clinical findings such as signs and symptoms of AKI, and may not take into account the risks inherent to the procedure, the desire of the patient and of his or her family, and overall quality of end of life. 3,4 Despite the progress seen with the use of hemodialysis in intensive care settings, the impact of dialysis on elderly patients with multiple comorbidities or organ failure (prescribed vasoactive drugs or mechanical ventilation) has been discussed for years. 7,8 Apparently, dialysis does not improve patient survival and, in some situations, may even accelerate death or increase end-of-life distress. 4,5 However, there is little evidence on the matter in the literature.
This study aimed to assess the impact of dialysis on the survival of elderly patients with AKI on intensive care.
Methods
This retrospective cohort study included data collected from the electronic charts of patients diagnosed with AKI during hospitalization at the intensive care unit of the Santa Casa de Misericórdia de Maceió, AL, Brazil, from January 2012 to December 2014. The Santa Casa is a reference hospital located in the capital city of the State of Alagoas, Brazil. The nephrologists present in the hospital are involved in the everyday care and assessment of renal patients.
Patients with 60+ years of age diagnosed with AKI according to the KDIGO criteria 10 were included in the study. Three hundred and eighty-two charts were reviewed, and 329 met the inclusion criteria. Patients without sufficient clinical or workup data were excluded. The study was initiated after the approval of the Research Ethics Committee of the Universidade Estadual de Ciências da Saúde de Alagoas (UNCISAL) (protocol no.: 62798216.3.0000.5011).
The patients enrolled in the study were characterized according to age, sex, etiology of AKI (ischemic, nephrotoxic, obstructive or mixed) and other variables such as occurrence of septic shock (acute circulatory failure caused by infection), oliguria (urine output below 400ml in 24 hours), need for mechanical ventilation (MV) and vasoactive drugs (VAD), use of diuretics, dialysis, and hospital death (at the ICU or in the hospital wards after discharge from the ICU). Comorbidities were assessed based on the Charlson Comorbidity Index. The number of days between the first alteration in creatinine levels and assessment by a nephrologist was also analyzed (∆T nephro). The following lab variables were assessed: creatinine, urea, potassium, and complete blood count.
Baseline creatinine was set as the lowest level found during hospitalization. Only traditional hemodialysis (HD) was offered, with session times ranging from two to four hours, blood and dialysate flow rates set at 300 and 500 ml/min respectively, using polysulfone capillary filters with a surface area of 1.8 m² and an ultrafiltration coefficient of 7.5 ml/h/mmHg. Dialysis was interrupted in cases of severe hemodynamic instability after the start of the procedure. Indications for dialysis included: azotemia with uremic symptoms (usually with urea levels > 150 mg/dL), oliguria refractory to diuretics (urine output < 400 ml in 24 hours), hyperkalemia refractory to drug therapy (K+ > 6.5 mmol/L), hypervolemia and metabolic acidosis (pH < 7.2 and serum bicarbonate < 16 mEq/L in arterial blood). The patients were divided into age ranges (60 to 70; 70 to 80; and 80+ years) for analysis.
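As an illustration only (not a clinical decision tool), the dialysis indications listed above can be encoded as a simple screening function. Thresholds are taken verbatim from the text; the flags for refractoriness, uremic symptoms, and hypervolemia are clinical judgments the study leaves to the treating team.

```python
def dialysis_indications(urea_mg_dl, urine_ml_24h, k_mmol_l,
                         ph_arterial, bicarb_meq_l,
                         uremic_symptoms=False, hypervolemia=False,
                         refractory_oliguria=False, refractory_hyperk=False):
    """Return the list of study indications met by one patient."""
    met = []
    if uremic_symptoms and urea_mg_dl > 150:
        met.append("azotemia with uremic symptoms")
    if refractory_oliguria and urine_ml_24h < 400:
        met.append("oliguria refractory to diuretics")
    if refractory_hyperk and k_mmol_l > 6.5:
        met.append("hyperkalemia refractory to drug therapy")
    if hypervolemia:
        met.append("hypervolemia")
    if ph_arterial < 7.2 and bicarb_meq_l < 16:
        met.append("metabolic acidosis")
    return met

# Hypothetical patient meeting two of the study's criteria.
print(dialysis_indications(urea_mg_dl=180, urine_ml_24h=300, k_mmol_l=5.1,
                           ph_arterial=7.30, bicarb_meq_l=20,
                           uremic_symptoms=True, refractory_oliguria=True))
```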
Numerical variables were expressed as mean values ± standard deviation (SD) or as median values and interquartile ranges, depending on whether they followed a normal distribution. Associations involving continuous variables were measured through Student's t-test or ANOVA, while categorical variables were analyzed with the chi-squared test. Variables eliciting significant differences in univariate analysis were tested with logistic regression for independent associations with death (Enter method). In order to study the impact of dialysis on death, the patients were divided into two groups based on whether they underwent dialysis and were matched for the main variables that might have an effect on death, such as use of vasoactive drugs, mechanical ventilation, oliguria, KDIGO stage, and others. The statistical method used was Propensity Score Matching. The level of significance was set at α = 0.05, with a 95% confidence interval. Statistical analysis was performed on SPSS (version 23).
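For readers who want to reproduce the matching logic, a minimal sketch follows. The study itself used SPSS; this Python version, with hypothetical column names, fits a propensity model by logistic regression and performs 1:1 nearest-neighbor matching (with replacement, for simplicity) on the logit of the propensity score.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(df: pd.DataFrame, treat_col: str = "dialysis",
                     covariates=("vad", "mech_vent", "oliguria", "kdigo")):
    """1:1 nearest-neighbor matching on the logit of the propensity score.

    Returns (treated, matched_controls) as two DataFrames of equal length.
    """
    X = df[list(covariates)].to_numpy(dtype=float)
    y = df[treat_col].to_numpy(dtype=int)

    # Propensity score: estimated P(treatment | covariates).
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps)).reshape(-1, 1)

    treated = np.flatnonzero(y == 1)
    controls = np.flatnonzero(y == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(logit[controls])
    _, idx = nn.kneighbors(logit[treated])
    return df.iloc[treated], df.iloc[controls[idx.ravel()]]
```

After matching, the death rates of the two returned groups can be compared directly, which mirrors the dialysis vs. conservative-treatment comparison reported below.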
Results
The study included 329 individuals. General, clinical, and workup data are described in Table 1. When the patients were divided based on age ranges, the individuals aged 70-80 and 80+ years were found to share many traits. The differences observed in some variables were more pronounced when the comparison was made against the patients in the younger age group (60-70 years; see Table 2). For example, the Charlson Comorbidity Index and the need for mechanical ventilation were higher among patients aged 70-80 and 80+ years when compared to the individuals in the 60-70-year age group. The individuals aged 60-70 years had significantly higher baseline creatinine levels than the patients aged 70-80 and 80+ years (p = 0.05). The same was true of peak creatinine. In terms of etiology of AKI, patients aged 70+ years had more ischemic AKI, while individuals aged 60-70 years had more multifactorial AKI (p = 0.01). The distribution of KDIGO categories and mortality were similar between the three age ranges. Fewer patients in the group aged 80+ years were prescribed dialysis, regardless of their KDIGO classification (only 18.4% of the group vs. 36.2% of the individuals aged 60-70 years and 31.4% of the patients aged 70-80 years; p = 0.01). The indication of dialysis was analyzed vis-à-vis the KDIGO classification. Among KDIGO 3 individuals, in whom dialysis is more likely to be indicated, fewer patients aged 80+ years were prescribed dialysis (only 28% vs. 46.2% of the patients aged 60-70 years and 41.8% of the patients aged 70-80 years; p = 0.06). The death rate of KDIGO 3 patients aged 70+ years requiring mechanical ventilation and vasoactive drugs was 93.5% (29 patients). Of the 29 patients who did not survive, 55.2% were on dialysis. The death rate of KDIGO 3 individuals aged 80+ years requiring mechanical ventilation and vasoactive drugs was 100% (26 patients). Thirty-eight percent of these 26 patients underwent dialysis. On account of the similarities between the groups aged 70-80 and 80+ years, the logistic regression model and Propensity Score Matching (PSM) were applied to individuals aged 70+ years. In the logistic regression model (Table 4), the variables independently correlated with death were septic shock (OR = 3.97; CI = 1.15, 13.59; p = 0.02), need for mechanical ventilation (OR = 4.48; CI = 1.82, 11.02; p = 0.001), and use of vasoactive drugs (OR = 4.53; CI = 1.84, 11.16; p = 0.001). The other variables tested with logistic regression were not associated with mortality.
Regarding PSM (Table 5), among the group of 171 patients not offered dialysis, PSM found 54 matches (controls not submitted to dialysis) for the 57 patients aged 70+ years who underwent dialysis. After the groups (dialysis and non-dialysis patients) were matched for the main variables correlated with death, no significant difference in mortality was observed between the patients offered dialysis and the individuals treated conservatively.
Discussion
This study comprised data taken from a population of elderly individuals, most with severe forms of AKI (48% on KDIGO 3) and critically ill (71.7% on mechanical ventilation and 48% taking vasoactive drugs).
In recent years, there has been discussion as to whether advanced life support (mechanical ventilation, vasoactive drugs, dialysis) should be offered to elderly patients with multiple comorbidities or organ dysfunction. The reasons for the debate include the elevated death rates observed in patients fitting this profile despite the availability of advanced life support and dialysis. [11][12][13] Furthermore, there is little data in the literature (and particularly a lack of publications reflecting the situation in Brazil) on the impact of dialysis on elderly patients on intensive care. Although dialysis may help patients maintain homeostasis until the recovery of renal function, the procedure has its risks (complications related to the vascular access, hemodynamic instability, bleeding caused by anticoagulant therapy). This is why the impact of dialysis on the mortality of critically ill patients has been discussed in recent years, with authors reporting worse outcomes in individuals offered dialysis than in patients managed conservatively, even when they were controlled for other factors related to mortality. 14,15 In this study, the patients aged 70-80 and 80+ years shared many characteristics (comorbidity index, need for mechanical ventilation and vasoactive drugs) and were analyzed together for the risk factors leading to death and for the impact of dialysis on their outcomes. As shown in previous studies, need for mechanical ventilation, use of vasoactive drugs, and septic shock were independent risk factors for death in our group of patients. 16,17 The study also showed that individuals aged 70+ years categorized as KDIGO 3 on mechanical ventilation and vasoactive drugs had a death rate of 93.5%, while 100% of the patients aged 80+ fitting the same profile died in spite of dialysis. With these data in mind, and in order to analyze the influence of dialysis on individuals aged 70+ years, the patients were divided into two groups based on whether they underwent dialysis; an attempt was also made to match patients for the main variables correlated with death. Patients were matched based on Propensity Score Matching, a statistical method often used in retrospective clinical trials for its effectiveness in controlling for variables between analyzed groups. 18 In PSM, patients on dialysis and individuals treated conservatively were matched for variables independently associated with mortality in our sample (Tables 4 and 5). After controlling for these variables, similar death rates were observed between patients treated conservatively and individuals offered dialysis. This finding suggested that dialysis did not have any impact on the mortality of this subgroup of patients (critically ill patients aged 70+ years). ICUs are filled with elderly patients, and every day physicians face situations in which dialysis would be prescribed if only workup parameters were considered. However, many patients are aged 70+ years and have decreased renal functional reserve and other organ dysfunctions as a result of senescence; and in addition to AKI, many need invasive ventilation and are hemodynamically unstable.
It should be noted that very few patients aged 80+ years were prescribed dialysis, even when they were categorized as KDIGO 3, when compared to individuals in the other age groups (28% vs. 46.2% of the individuals aged 60-70 years and 41.8% of the patients aged 70-80 years), as previously reported in the literature. 19 A probable explanation for this finding is the higher occurrence of comorbidities in this group or the desire of the patients or their families not to submit to invasive procedures at the end of life. However, since the families of non-dialysis patients were not interviewed, this explanation remains speculative. A lack of clinical conditions for performing dialysis might also have been the actual reason. Considering that previous epidemiological studies reported elevated death rates among critically ill elderly patients with AKI, in some countries the idea of offering time-limited trials of dialysis 20 is gaining strength. In this approach, dialysis is started and the patient is observed for a few days for global clinical progression (hemodynamic patterns, renal function, and other organ dysfunctions). If none of the parameters improves within one or two weeks, the procedure is suspended. It should be noted that, in the absence of national consensus statements on the matter, the decision to suspend dialysis must be made with the agreement of the patients or their families, the assisting physician, and the nephrology and intensive care teams.
A relevant point derived from the findings published in this study is that the information discussed here may provide additional input to patient families and physicians and further inform their discussions, based on recent scientific evidence, on the degree of support that should be offered to elderly patients on intensive care, an ever-present reality in ICUs all over the world. 21 These discussions help decrease the number of cases of dysthanasia, in which life is prolonged without consideration of quality of life. Another relevant fact is that, to our knowledge, this was the first Brazilian study to assess the impact of dialysis on the care of elderly individuals while controlling for variables correlated with death using Propensity Score Matching.
Although the hospital in which data was collected is a reference center in our State, one of the limitations of this study was the fact that it only enrolled patients from one center. Therefore, its findings cannot be generalized to other populations. Other limitations include the retrospective nature of the study and the small size of the included patient sample.

Conclusion

A low proportion of patients aged 80+ years underwent dialysis, possibly on account of external factors such as the desire of the patients or their families. The main risk factors for death were septic shock, use of vasoactive drugs, and mechanical ventilation. In individuals aged 70+ years, dialysis did not reduce mortality. | 2018-08-14T19:12:27.365Z | 2018-08-02T00:00:00.000 | {
"year": 2018,
"sha1": "51bd965962745f12a0bf996621c880ff723f34d2",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/jbn/v41n1/2175-8239-jbn-2018-0058.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51bd965962745f12a0bf996621c880ff723f34d2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253957890 | pes2o/s2orc | v3-fos-license | Does the toxicity of endocrine therapy persist into long-term survivorship?: Patient-reported outcome results from a follow-up study beyond a 10-year-survival
Background Endocrine treatment (ET) is a highly effective breast cancer treatment but can distinctly impair breast cancer patients' quality of life (QOL). In a patient-reported outcomes (PRO) study conducted by the authors in 2011, patients reported higher ET-induced symptom levels than known from the registration trials, suggesting that ET toxicity is underestimated. Based on these study results, we investigated the long-term sequelae of ET reported by breast cancer survivors (BCS) in a follow-up study conducted 5–10 years after the earlier assessment. Methods BCS who had participated in the earlier study (n = 436) were approached for study participation either at one of their routine follow-up appointments or via mail; consenting patients were asked to complete the same PRO assessment used in the original study (FACT-B+ES). BCS with relapse/progressive disease were excluded from the analysis. We compared long-term endocrine symptomatology and the overall QOL outcome (i.e., FACT-G and -ES sum scores). Results A final sample of 268 BCS was included in the analysis. BCS reported a significant improvement of the overall endocrine symptomatology (baseline mean = 59 vs. follow-up mean = 62, p < 0.001), physical (baseline mean = 23.9 vs. follow-up mean = 24.8, p < 0.01) and functional well-being (baseline mean = 21.7 vs. follow-up mean = 22.7, p = 0.013), and overall QOL (mean baseline = 88.3 vs. mean follow-up = 90.9, p = 0.011). However, the prevalence of particular symptoms well known to be ET-induced did not change over time, such as joint pain (baseline = 45.5% vs. 44.2%, n.s. difference), lack of energy (36.4% vs. 33.8%, n.s. difference), weight gain (36.8% vs. 33.9%, n.s. difference) or vaginal dryness (30.2% vs. 33%, n.s. difference), and the proportion reporting lack of interest in sex increased (40.4% vs. 48.7%, p < 0.05). Conclusion Presented results indicate that BCS recover well in terms of overall endocrine symptomatology and quality of life but experience some clinically relevant and unfavorable ET-related long-term effects.
Introduction
With constantly increasing survival rates over the last decade, the group of long-term breast cancer survivors (i.e., permanent survivorship according to ASCO, www.cancer.net) has been expanding. Personalized treatments such as endocrine therapy (ET), applied for multiple years after initial treatment, make a distinct contribution to these increased survival rates. More than 75% of women diagnosed with breast cancer would receive at least 5 years of ET as part of their treatment [1,2]. Though increasing survival, women are undergoing these highly effective treatments at the cost of a potentially enduring impairment of their quality of life (QOL) [3,4]. Hot flashes, joint pain, sexual problems, and emotional instability are among the most prevalent ET side effects challenging patients' QOL [5,6]. Some evidence suggests that these ET side effects and QOL impairments occur not only during treatment but persist after treatment completion far into the survivorship stage [7][8][9].
Hence, survivorship issues such as QOL, encompassing not only physical but also psychosocial recovery in the long term, gain importance when it comes to comprehensive survivorship care [10][11][12][13]. An essential step in this regard is the systematic identification of the ET long-term sequelae most detrimental to breast cancer survivors (BCS), including the patient's subjective experience. For this purpose, patient-reported outcomes (PROs) have been proven to give comprehensive insight into the patient's physical and psychosocial health, complementing provider-generated information [14]. In a study called PRO-BETh (PROs in Breast cancer patients undergoing ET), performed from 2009 to 2011, the authors were able to demonstrate the value of PROs for the understanding of ET treatment toxicity [14,15]. Evidence generated by this study suggested high rates of ET-induced toxicity for both pre- and postmenopausal women. The prevalence of most side effects observed in this "real-life" study (i.e., a sample within routine after-care) significantly exceeded those reported by the original registration trials [16][17][18]. Joint pain, hot flashes, loss of interest in sex, and lack of energy were the most prevalent symptoms reported by patients. In order to gain more insight into the long-term sequelae of ET, the authors conducted a follow-up study to the research project PRO-BETh.
The main aim of this follow-up study was the determination of patient-reported ET-associated toxicity and QOL outcomes in BCS 5-10 years after the initial assessment.
PRO-BETh study description
The original PRO-BETh study [14,15] was designed as a cross-sectional observation study targeting the assessment of the prevalence and severity of ET-induced side effects from the subjective patient perspective. For this purpose, BC patients undergoing up-front ET with either AIs or tamoxifen (with or without Zoladex) at the time of assessment completed a comprehensive PRO battery on QOL, including physical side effects and psychosocial burden. Reported symptom prevalence rates were compared to data derived from the pivotal registration trials (ATAC 2005, BIG 1-98) [16]. Overall, PROs resulted in significantly higher prevalence rates for most symptoms as compared to the physician ratings published in pivotal clinical trials. The authors concluded that ET toxicity seems to be underestimated in routine clinical care. Please find further study details in the respective publications [11,12].
Sample
All BC patients who had participated in the original study were eligible and approached for participation in the follow-up assessment. Contact data were taken from the medical records of the Department of Gynecology and Obstetrics at the Medical University of Innsbruck. Inclusion criteria for this study were defined as follows:

• Participation in the initial PRO-BETh study
• Breast cancer survivor having undergone endocrine treatment, defined as a patient who had completed the primary treatment (maintenance treatment can be ongoing) by the EORTC Cancer Survivorship Task Force [19]
• No overt cognitive impairment
• Written informed consent
• Fluency in German
Procedure
Following the recruitment procedure of the original project, the data assessment was conducted at the outpatient unit of the Department of Gynecology and Obstetrics at the Medical University of Innsbruck.
Breast cancer survivors (BCS) were approached for study participation either at one of their routine follow-up appointments (in Austria, BC patients have lifelong routine check-ups at the primary care center) or via mail after an introductory telephone call explaining the study purpose. Patients provided written informed consent. Consenting BCS completed the same PRO assessment used in the original study (see below). Patients returned the questionnaires pseudonymized (identified by an ID number) in an envelope, either via mail or in person at the outpatient unit (paper-pencil assessment). Clinical data for participants were derived from the medical records.
PRO instruments
The original questionnaire battery included the Functional Assessment of Cancer Therapy-Breast (FACT-B) and the Functional Assessment of Cancer Therapy-Endocrine Subscale (FACT-ES). The Functional Assessment of Cancer Therapy-Breast and Endocrine Subscale (FACT-B + -ES) consists of 36 items assessing QOL in BC patients. The questionnaire uses a five-point Likert scale and refers to a time frame of the past seven days. The answer format ranges from 0 (not at all) to 4 (very much). Scores range from 0 to 108 for general well-being, from 0 to 24 for emotional well-being, and from 0 to 28 each for physical and functional well-being. High values indicate a good QOL. The FACT-B is supplemented by the endocrine subscale (FACT-ES), which measures symptoms and side effects related to ET for breast cancer such as hot flashes, joint pain and loss of libido [3]. The FACT-ES comprises 19 items. Further details have been published elsewhere [14,15].
Statistical analysis
Sample characteristics are described using absolute and relative percentages, means and standard deviations.
Primary analysis: In order to investigate long-term ET toxicity, we analyzed the FACT-B + -ES at the single-item level, following the analysis of the original study; that is, we compared FACT-B + -ES data of each patient from the first assessment to her data at the follow-up assessment. We present the prevalence of patient-reported physical and psychological symptoms related to ET (derived from the FACT-B + -ES) as percentages and 95% confidence intervals for the baseline and follow-up time points. Symptom frequency was calculated by summing the percentages of patients selecting the categories 'somewhat', 'quite a bit' and 'very much' at the single-item level of the FACT-B + -ES. Confidence intervals were calculated using the modified Wald method [20]. The Sign Test was used to compare symptom frequencies between the two assessment time points. We further aimed at clarifying the impact of age on symptoms. For this purpose, age was considered a relevant covariate already at the first assessment, with a continuous effect on the outcome. We were hence interested in the impact of age on symptom change over time rather than assessing its effect at the follow-up assessment only. We therefore calculated the difference between the first and follow-up assessments for the FACT-B + -ES items and compared age groups (< 50, 50-59, 60-69 and > 70 years) for this difference using the Kruskal-Wallis Test.
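The analyses above were done in SPSS; a minimal open-source sketch of the two primary-analysis steps is given below. The symptom counts are hypothetical; only the dichotomization rule (responses of 'somewhat' or worse) and the two tests follow the text. The modified Wald interval is implemented as the add-2-successes/add-2-failures approximation.

```python
# Sketch of the primary symptom-prevalence analysis described above.
# Illustrative only: the symptom counts below are made up, not study data.
import numpy as np
from scipy.stats import binomtest

def modified_wald_ci(x, n, z=1.96):
    """Modified Wald 95% CI for a proportion (add 2 successes, 2 failures)."""
    p = (x + 2) / (n + 4)
    half = z * np.sqrt(p * (1 - p) / (n + 4))
    return max(0.0, p - half), min(1.0, p + half)

# Symptom counted as present if the answer is 'somewhat', 'quite a bit'
# or 'very much' (>= 2 on the 0-4 Likert scale); hypothetical counts:
n = 268
x_baseline, x_followup = 140, 120
print(modified_wald_ci(x_baseline, n))
print(modified_wald_ci(x_followup, n))

# Sign test between the two time points: only the discordant pairs matter.
n_improved, n_worsened = 45, 25            # hypothetical discordant pairs
res = binomtest(n_improved, n_improved + n_worsened, p=0.5)
print(res.pvalue)
```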
Secondary analysis: For the investigation of the overall long-term QOL outcome (i.e. FACT-G and FACT-ES sum scores), we used a mixed linear model. In this analysis, the dependent variables were log-transformed to obtain a normal distribution. Time point (first assessment, follow-up) was included as a fixed effect, and post hoc we conducted pairwise comparisons between time points and tamoxifen vs. aromatase inhibitor treatment (with Bonferroni correction for multiplicity). To assess the association of age with change over time, we included the two-way age-by-time-point interaction in the model. To account for correlations between repeated measurements, we used a first-order autoregressive covariance matrix. P-values below 0.05 were considered statistically significant. All analyses were done with SPSS 22.0.
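The model above was fit in SPSS; there is no drop-in open-source equivalent of SPSS MIXED with an AR(1) residual covariance, so the sketch below uses a Gaussian GEE with an autoregressive working correlation as a rough stand-in. The data frame, column names and all values are invented.

```python
# Rough stand-in for the SPSS mixed model: Gaussian GEE with an
# autoregressive working correlation over the two repeated measurements.
# The data frame and every value in it are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 268
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n), 2),
    "timepoint": np.tile([0, 1], n),            # 0 = first, 1 = follow-up
    "age": np.repeat(rng.normal(65, 10, n), 2),
    "fact_g": rng.normal(90, 15, 2 * n),
})
df["log_fact_g"] = np.log(df["fact_g"])         # log-transform, as in the paper

model = smf.gee(
    "log_fact_g ~ timepoint + age + age:timepoint",  # age-by-time interaction
    groups="patient",
    data=df,
    time=df["timepoint"].to_numpy(),
    cov_struct=sm.cov_struct.Autoregressive(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```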
We obtained ethical approval for this follow-up project from the Ethics Committee of the Medical University of Innsbruck (Innsbruck, 22.04.2017/Ah).
Sample
Of the 436 patients on endocrine treatment originally surveyed in 2009-11, 27 patients (6.2%) were deceased; this corresponds to an overall survival (OS) of 93%. A total of 290 breast cancer long-term survivors participated in the follow-up study. The remaining 119 patients either did not agree to fill out questionnaires for personal reasons (11.9%) or could not be contacted for logistic reasons (15.4%). Hence, a response rate of 70% was achieved. Among the patients in the final analysis, a total of 8% reported a relapse (3.4% in the AI group and 4.5% in the tamoxifen group). We excluded those patients from further analysis to preserve group homogeneity. Hence, we report data on a final sample of 268 BCS. Please find details in the flow chart below (Fig. 1).
Patients participated after a median follow-up period of 8 years (range 6-9 years; mean = 8.02). At the time of the follow-up assessment, patients were aged 65 years on average and 90% were postmenopausal. Patients who had received tamoxifen were significantly younger than patients with AI therapy (p < 0.001) as tamoxifen has been the firstline ET for premenopausal patients at the time the original study was performed (and AIs for postmenopausal patients). Details on clinical data are presented in Table 1.
Changes of ET-related toxicity
We observed a significant improvement of the overall endocrine symptomatology in the long-term (FACT-ES baseline mean = 59 vs. follow-up mean = 62, p < 0.001); this significant improvement was found for both, patients who had received tamoxifen (FACT-ES baseline mean = 58.4 vs. follow-up mean = 61.2) as well as those with previous AI treatment (FACT-ES baseline mean = 59.7 vs. follow-up mean = 62.7).
In detail, vasomotor symptoms, including hot flashes and cold/night sweats, decreased significantly with time in both groups. In contrast, gynecologic symptoms did not change over time, except for vaginal discharge, which decreased significantly; the percentage reporting loss of interest in sex even increased in the long term. Interestingly, 9.3% of the overall sample did not complete either of the two questions on sexuality and 20.7% answered only one of them (i.e. pain with intercourse and loss of interest in sex) at the second assessment. No significant difference was observed for gastrointestinal symptomatology (p > 0.05 for all gastrointestinal symptoms); a total of 38.4% and 30.4% of patients reported weight problems in the tamoxifen and AI group, respectively, at follow-up. Finally, the typically ET-related symptoms joint pain, lack of energy and mood swings were highly prevalent at the follow-up assessment time point. Details are presented in Tables 2, 3 and 4.
Regarding the effect of age on symptom change over time, independent of the treatment received, we found no difference for most symptoms across age groups (results not shown), with the exception of vaginal discharge (p < 0.001), headaches (p = 0.023) and mood swings (p < 0.001) (details in Table 5).
QOL outcome
Overall, QOL according to the FACT-global score was significantly higher in long-term BCS compared to QOL in patients on ET treatment (mean baseline = 88.3 vs. mean follow-up = 90.9, p = 0.011). This was true for patients who had received tamoxifen (mean baseline = 89.3 vs. mean follow-up = 92.8) as well as those with previous AI treatment (mean baseline = 87.5 vs. mean follow-up = 89.5). However, in terms of clinical relevance the improvement seems to be minor [21].
BCS reported significantly higher levels of physical well-being (FACT physical well-being baseline mean = 23.9 vs. follow-up mean = 24.8, p < 0.01) and functional well-being (FACT functional well-being baseline mean = 21.7 vs. follow-up mean = 22.7, p = 0.013) than patients on ET treatment. For functional well-being, we observed a trend
Discussion
While previous evidence suggests that ET-associated toxicity is high and distinctly impairs patient QOL during intake [14,22] we lack evidence on the patients' experience of these symptoms in the long run, in particular after ET termination. In this paper, we aimed at shedding light to the physical and psychosocial long-term outcome after ET in BCS from a patient perspective.
Overall, BCS experienced a decrease of the overall ET-related symptomatology in the long term (as indicated by the overall score of the FACT-ES subscale). In particular, the vasomotor symptomatology, a major side effect of ET, seems to decrease significantly over time. In addition, patients reported a small but significant increase of the overall physical and functional well-being scores as well as their general QOL over time. This observation is consistent with the results of Schmitt et al. demonstrating improvement of physical and role functioning within 5 years after the end of cancer treatment, even exceeding the levels of an age-matched healthy population [23,24]. Others suggest levels of overall QOL in BCS to be comparable to those of a population without a previous cancer disease [7,8]. We might hence conclude that there is some sort of stabilization of the overall symptomatology and QOL in BCS. At the same time, we observed a lack of recovery when it comes to specific ET-related symptoms. Several symptoms seem to persist at high levels: joint pain, loss of interest in sex or weight gain as well as lack of energy were the most prevalent long-term follow-up problems for BCS in our study. Our results complement existing evidence illustrating that specific cancer treatment-related symptoms challenge patients in the long term. For instance, Haidinger [9] and others [25] observed high levels of joint pain in BCS after ET termination. Van Leuuwen (2018) identified joint pain among the chronic symptoms highly relevant and burdensome for cancer survivors when asking patients to name QOL topics relevant to their cancer survivorship [19]. Evidence for the persistence of fatigue and lack of energy among the most prevalent long-term sequelae of a cancer disease is robust [7]. Weight gain is a well-known problem related to ET [26]. Particularly, patients receiving tamoxifen (i.e. younger patients) continue to struggle with their weight over years [26]. In the study presented herein, more than one third of both AI patients and tamoxifen patients reported persistent problems with weight gain. Potentially resulting in obesity, weight gain is not only a problem for subjective overall well-being, body image or the feeling of attractiveness but also mediates disease control and clinical outcome [27]. Moreover, this study again demonstrates the impact of ET on sexual health in BCS. We observed not only a lack of recovery of interest in sexuality in the long term but even a tendency towards symptom deterioration in the "younger" (originally premenopausal) patient group. The same was true for vaginal dryness in premenopausal patients. This is in line with two recent meta-analyses highlighting a high prevalence of female sexual dysfunction in BCS [28,29]. The authors recently reported that about 70% of BCS complain of clinically relevant sexual dysfunction [30]. In addition, 10% of the participants had not answered the two questions about sexuality and 20% had answered only one of the two questions. This observation supports the notion that the topic of sexuality is still a taboo in clinical care that patients are reluctant to talk about [31][32][33]. Sexuality is a complex issue influenced by numerous bio-psycho-social factors and subject to natural changes over the life span: for instance, age and menopausal status are well known to affect interest in sexuality and libido [34,35].
In this study, we were not able to clearly isolate an independent effect of factors potentially contributing to the patients' sexual outcome as we lack a baseline assessment of QOL before the start of ET. However, in view of more than 50% of BCS on-and off-treatment indicating sexual impairments, sexuality should be considered as a major, persistent care demand relevant for BC (survivorship) care.
With regard to the psychological domain, BCS reported no change over time. Established evidence supports the notion that psychological issues continue to be high far into the survivorship stage while patients recover physically [9]. Mood swings, though decreasing over time, can be a continuing problem for at least about a third of the younger patients, as observed in this study. Other studies [7,9] also describe comparable results with regard to emotional well-being. In particular, young BC patients need to meet the challenges of a cancer diagnosis in the middle of their work life, educational stage or during the phase of family planning; life plans need to be postponed or finally cannot be achieved, requiring (psychological) adaptation to new circumstances. Hormonal changes induced by ET, which can be irreversible, might aggravate these emotional challenges; the latter was not investigated in this study but will be the topic of further studies.
In conclusion, a distinct proportion of BCS experiences a chronification of specific symptoms after treatment completion. These health impairments can significantly interfere with a management of daily life independently from others thereby having a profound impact on patient QOL. The persistence of some ET-related physical and emotional symptoms should not be underestimated-this is true also beyond the actual intake of ET.
Limitations
A limiting factor for the interpretation of study results is the lack of knowledge of the QOL outcome in the non-participant group. Though we have a very satisfactory response rate of 70%, a potential selection bias towards a worse or better QOL cannot be excluded. Furthermore, a direct comparison of data with an age-matched population without a cancer disease would have enriched the interpretability of data on QOL outcome. Data from such a reference sample could help identify other factors modulating QOL over a span of almost a decade beside the cancer diagnosis and related treatments; for example, natural menopause and age are well known to have an effect on interest in sex or mood swings independently from cancer. The investigation of the independent effect of age on QOL outcome was further limited in the presented analysis for the following reason: the type of ET prescribed originally had been based on the patient's menopausal state (i.e. tamoxifen for premenopausal patients and aromatase inhibitors for postmenopausal patients), so that age is an immanent factor related to the type of treatment (i.e. a high inter-correlation of the covariates age and type of treatment). An independent effect of age is therefore difficult to obtain. Finally, the sample heterogeneity in terms of treatment duration and time from ET termination to the follow-up assessment limits the interpretation of results. Clearly, a longitudinal design with a baseline assessment before the start of ET and a more homogeneous sample at the first and second assessments would have contributed to a more accurate picture of the true extent of long-term toxicity caused by adjuvant ET; this limitation from the original study persists to the follow-up assessment. However, the presented results clearly indicate that BCS experience unfavorable long-term effects that need to be better understood and should be subject to further research.
Clinical implication
Our results are of importance for clinical survivorship care: women after ET seem to recover well overall when it comes to QOL issues. However, they still suffer from particular health impairments presenting a high potential for QOL limitations. The most persistent problems seem to be sexual health issues, psychological demands and joint pain. Survivorship care efforts should focus on these problems. This includes the provision of more information on the long-term sequelae of breast cancer and ET in patient education, a systematic assessment of the respective symptoms at after-care visits, and the integration of targeted, supportive treatment individually tailored to the BCSs' demands into long-term care plans. This might also include the adjustment of ET treatment application towards individual demands. For instance, the SOLE study [36] demonstrated an intermittent administration of letrozole to be a safe and advantageous option in terms of QOL. The option of a treatment interruption might help patients to stabilize their QOL and ultimately better adhere to the treatment regimen.
Beside the increase of survival, the prevention of longterm QOL problems should be an ultimate goal for BC survivorship care.
Appendix
See Table 6. Table 6 Physical and psychological symptoms in BCS at the first and the follow-up assessment a Group of patients with the respective symptom at both assessment time points. b Group of patients with the respective symptom at the first assessment only (not reported at the follow-up assessment). c Group of patients with the respective symptom at the follow-up assessment only (not reported at the first assessment). | 2022-11-25T06:17:29.082Z | 2022-11-23T00:00:00.000 | {
"year": 2022,
"sha1": "30aff0e08684b60c32c08a8bc7ab78b480b4f41e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10549-022-06808-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "fdeaebd170e4dab5ba82077a710b426219e9b75a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118425033 | pes2o/s2orc | v3-fos-license | Quantum annealing and the Schr\"odinger-Langevin-Kostin equation
We show, in the context of quantum combinatorial optimization, or quantum annealing, how the nonlinear Schr\"odinger-Langevin-Kostin equation can dynamically drive the system toward its ground state. We illustrate, moreover, how a frictional force of Kostin type can prevent the appearance of genuinely quantum problems such as Bloch oscillations and Anderson localization which would hinder an exhaustive search.
I. INTRODUCTION
The aim of combinatorial optimization is to find good approximations to the solution(s) of minimization problems. Many of the most famous algorithms currently used in this field [1] were inspired by analogies with physical systems. Among them the most celebrated is Thermal simulated annealing [2] proposed in 1983 by Kirkpatrick et al.: the space of all admissible solutions is endowed with a potential profile dependent on the cost function associated to the optimization problem. The exploration of this space is represented by a temperature dependent random walk. An opportunely scheduled temperature lowering (annealing) stabilizes then the walk around a, hopefully global, minimum of the potential profile. The Quantum annealing approach to combinatorial optimization [3, 4], instead, was originally suggested by the behaviour of the stochastic process q_ν(t) associated [5,6] with the ground state φ_ν of a Hamiltonian of the form

H_ν = −(ν²/2) d²/dx² + V(x),   (1)

where the potential function V encodes the cost function to be minimized. The behavior of q_ν is characterized by long sojourns around the stable configurations, i.e. minima of V(x), interrupted by rare large fluctuations which carry q_ν from one minimum to another: q_ν is thus allowed to "tunnel" away from local minima to the global minimum of V(x). The deep analysis of the semiclassical limit performed in [7] shows, indeed, that as ν → 0+ "the process will behave much like a Markov chain whose state space is discrete and given by the stable configurations". However, the ground state of H_ν is seldom exactly known and approximations are required. One of the earliest proposals in this direction, advanced in [4] and applied in [3], was to construct an unnormalized approximation of φ_ν(x) by acting on a suitably chosen initial condition φ_trial(x) with the Hamiltonian semigroup exp(−tH_ν), namely by solving, with the initial condition φ_trial, the imaginary time Schrödinger equation. Similar ideas appear in the chemical physics literature [8] and, with more specific reference to the optimization problems considered here, in [9,10,11]. Yet, the inability to autonomously construct the ground state process, without recourse to the unphysical step of imaginary time evolution, substantially detracts from what is otherwise a physical route to optimization by dynamical evolution toward the ground state.
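To make the imaginary-time step concrete, here is a minimal numerical sketch of ground-state projection by the semigroup exp(−tH_ν) on a discretized line. The double-well potential, the value of ν, the grid and the trial state are all illustrative choices, not taken from the papers cited.

```python
# Imaginary-time relaxation toward the ground state of
# H_nu = -(nu^2/2) d^2/dx^2 + V(x), discretized on a 1-D grid.
# Potential and all parameter values are illustrative.
import numpy as np
from scipy.linalg import expm

nu = 0.5
x = np.linspace(-4.0, 4.0, 400)
dx = x[1] - x[0]
V = (x**2 - 1.0)**2 + 0.3 * x          # asymmetric double well (assumed)

# Finite-difference Hamiltonian: tridiagonal kinetic term plus diagonal V
diag = nu**2 / dx**2 + V
off = -(nu**2 / (2.0 * dx**2)) * np.ones(x.size - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

psi = np.exp(-x**2)                    # arbitrary positive trial state
psi /= np.linalg.norm(psi)
P = expm(-0.05 * H)                    # propagator exp(-dtau*H), dtau = 0.05
for _ in range(400):                   # repeated application filters out
    psi = P @ psi                      # excited states exponentially fast
    psi /= np.linalg.norm(psi)         # keep the state normalized

E0 = psi @ (H @ psi)                   # Rayleigh quotient -> ground energy
print(f"estimated ground-state energy: {E0:.4f}")
```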
In this note, encouraged by the progress in quantum annealing in the last twenty years, as reviewed for instance in [12,13,14], by its close relationship with adiabatic quantum computation [15] and by proposals of its hardware implementation [16], we try to eliminate the above unphysical step: we try to implement, instead of imaginary-time evolution, the idea of reaching the ground state with the help of viscous friction [17,18,19]. We first introduce the nonlinear Schrödinger-Langevin-Kostin (SLK) equation and illustrate, by means of examples on two toy models, how a frictional force acts in the continuous case. Then we turn to quantum combinatorial optimization and show how dissipation can, in the discrete case, balance genuinely quantum effects, such as Bloch oscillations and Anderson localization, which can hinder the search for optimal solutions.
II. CONTINUOUS CASE
The SLK equation is the analogue of the Heisenberg-Langevin equation and represents a quantum analogue of classical motion with a frictional force proportional to velocity [17,18]; it can be seen as a rough analogue of the classical Drude-Lorentz model of Ohmic friction, i.e. an approximate description of the motion of a quantum particle through matter with inelastic scattering. A solution ψ(t, x) = ρ^(1/2)(t, x) e^(iS(t,x)) of the equation

iν ∂ψ/∂t (t, x) = [H_ν + νβ (S(t, x) − ⟨S(t)⟩)] ψ(t, x)   (2)

satisfies the inequality (d/dt)⟨ψ(t)|H_ν|ψ(t)⟩ ≤ 0 for β ≥ 0, H_ν being the Hamiltonian (1). What we will show below is how the norm-preserving dissipative evolution described by (2) can dynamically drive a suitable initial condition ψ(0, x) toward the ground state φ_ν(x) of H_ν. Toy model 1: Require a chosen positive wave function φ_ν (the parameters c_{±,0} being chosen so that φ_ν(x) > 0) to be the ground state of H_ν and to belong to the eigenvalue E_ν = 0. The above two requirements determine the potential V(x). Evolving a suitable initial condition under (2) for a time interval (0, t_max) shows that, as ⟨ψ(t)|H_ν|ψ(t)⟩ decreases with time, the "vacuum overlap" |⟨ψ(t)|φ_ν⟩|² approaches the value 1. While the state ψ(t) approaches the ground state, some of the probability mass "tunnels" from the leftmost (local) to the rightmost (global) minimum. We point out that this class of examples, where both the ground state wave function φ_ν(x) and the ground state energy E_ν are known, also allows for the calibration of the numerical method. In our case we have used the built-in NDSolve resource of Wolfram Mathematica 6. Toy model 2: For the sake of comparison with the classical literature on quantum annealing [12,20], we consider here a double-well potential. As shown in figure 2(a), for the parameters used there, the local minimum of the potential V(x) is wider than the global one. We refer the reader to section 2.2 of [12] and to [20] for a discussion of the meaning of the parameters and for the presentation of numerical experiments comparable with ours. Here, we use this well-known toy model as an example in which the ground state is unknown and the dissipative dynamics of SLK type provides a method to find it. While the initial condition evolves (figure 2(a)), the mean value of H_ν decreases as in figure 2(b). That ψ(t_max, x) is a good approximation of the ground state is shown by comparing, in the inset of figure 2(b), ψ(t_max, x) with the ground state belonging to the eigenvalue 0. Comparison of the two curves in the inset of figure 2(b) is, furthermore, suggestive of a real-time version of Piela's method of deformation of the potential energy hypersurface [9]. In the continuous case, then, the SLK Hamiltonian enriches quantum annealing with what can be seen as a probability percolation: while the state of the system converges toward the ground state, some probability mass tunnels from one minimum to another and, instead of tunneling back, as would happen in a reversible dynamics, remains there. This autonomous stabilization would represent an alternative to adiabatic quantum computation [15].
III. DISCRETE CASE
Most of the effort in Ref.
[4] went into making the intuition developed so far available in a context of combinatorial optimization. In such a context the domain of the function V to be minimized is a finite set Q, and the search for the minimum of V is modeled on a graph (Q, E), where the edges e ∈ E describe the moves allowed in the search. For instance, in [4], Q was taken to be the Boolean hypercube Q_n = {−1, 1}^n, for some positive integer n, and an edge was placed between any two points in Q_n separated by a unit Hamming distance. In this note, we consider the much simpler instance in which Q is, for some positive integer s, the finite set Λ_s = {1, 2, . . . , s} equipped with the set of edges E = {{i, j} : (i, j) ∈ Λ_s × Λ_s ∧ |i − j| = 1}. According to the general approach outlined in [4], this amounts to a search for the minimum of the function V defined on Λ_s by means of an interacting continuous-time quantum walk [21] on Λ_s governed by a Hamiltonian of the form

(h ψ)(x) = ψ(x) − (1/2)[ψ(x + 1) + ψ(x − 1)] + V(x) ψ(x),   (4)

consistent with the lattice energy-momentum relation E(p) = 1 − cos p quoted below. We wish to show here that the quantum search outlined above can suffer from two typically quantum problems, namely Bloch oscillations [22] and Anderson localization [23], and that a certain amount of "viscous" friction can provide some relief for both these problems. On a finite box Λ_s, we consider the evolution of an initial condition built out of the box eigenfunctions c_k(x) = √(2/(s + 1)) sin(kπx/(s + 1)). We refer the reader to section 5 of [21] for a motivation of this choice: suffice it here to say that it describes a spatially well-localized wave packet that, in the absence of any potential, moves back and forth, with speed close to 1, inside the box Λ_s, as in figure 3(a). The effect on this ballistic evolution of a linear potential V(x) = −gx is shown in figure 3(b). For g = O(1/s), the peculiar energy-momentum relation E(p) = 1 − cos p, holding on a discrete lattice, determines Bloch oscillations that prevent the wave packet from approaching the point x = s at which the minimum of the cost function V(x) is located. Figure 3(b) is therefore a reminder of the fact that a greedy quantum optimization driven by the cost function itself acting as a potential can be hindered by the fact that, on a lattice, increasing momentum p can mean decreasing velocity v(p) = sin p. We propose here to introduce a certain amount of viscous friction in the discrete Schrödinger equation, as a "Kostin potential" K(t, x) ≈ β S(t, x) = β Arg(ψ(t, x)):

i (d/dt) ψ(t, x) = [(h + K(t, ·)) ψ(t, ·)](x).   (5)

The idea is that friction can prevent the momentum p from crossing the first Brillouin zone and thus can prevent the velocity sin p from being inverted before the wave packet reaches the boundary of Λ_s. This unwanted inversion is illustrated in figure 3(b); the effect on it of a suitable Kostin potential is shown in figure 3(c).
As is easy to check, for ψ(t) evolving according to (5), the actual form of K(t, x) that we adopt in order to achieve a decrease of ⟨ψ(t)|h|ψ(t)⟩ is

K(t, x) = β Σ_{y=2}^{x} sin(S(t, y) − S(t, y − 1)), with β > 0.

Figure 4(a) shows, instead, the effect, in the form of Anderson localization, of a random Gaussian potential of mean 0 and variance σ², acting independently on each site of Λ_s. The order of magnitude σ_0 = (10/s)^(3/2) of the noise parameter σ is suggested by a scaling argument [24]. Whereas the fact that friction can wipe out Bloch oscillations is well known [19], the less well-known fact that we show here is that the pseudo-ballistic motion shown in figure 3(c) is much more stable than the truly inertial motion represented in figure 3(a) with respect to the onset of Anderson localization.
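To make the lattice dynamics concrete, here is a minimal numerical sketch of equation (5) with the Kostin potential above. The hopping normalization (chosen so that E(p) = 1 − cos p), the linear ramp V(x) = −gx, the Gaussian wave packet and all parameter values are illustrative assumptions, not values from the paper.

```python
# Discrete Schrodinger evolution on the box Lambda_s with a linear ramp
# V(x) = -g*x and the lattice Kostin friction K(t,x) quoted above.
# Hopping normalized so that E(p) = 1 - cos p; all values illustrative.
import numpy as np
from scipy.linalg import expm

s, g, beta, dt = 60, 0.05, 0.1, 0.1
x = np.arange(1, s + 1)

h = (np.eye(s)
     - 0.5 * (np.eye(s, k=1) + np.eye(s, k=-1))   # nearest-neighbour hopping
     + np.diag(-g * x))                            # cost function as potential

# Gaussian packet started near the left wall, boosted to momentum pi/2
psi = np.exp(-(x - 10.0) ** 2 / 8.0) * np.exp(1j * np.pi / 2 * x)
psi /= np.linalg.norm(psi)

for _ in range(2000):
    S = np.unwrap(np.angle(psi))                           # phase S(t, x)
    K = beta * np.concatenate(([0.0], np.cumsum(np.sin(np.diff(S)))))
    psi = expm(-1j * dt * (h + np.diag(K))) @ psi          # freeze K per step
    psi /= np.linalg.norm(psi)                             # guard round-off

print("mean position:", float(x @ np.abs(psi) ** 2))       # drifts right if beta > 0
```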
IV. CONCLUSIONS AND OUTLOOK
As a final remark, we observe that the same framework developed in this note for the combinatorial optimization metaphor can be used, with minor changes, to describe an excitation travelling along a spin chain or a light pulse propagating through a waveguide lattice [25]. We conjecture, therefore, that SLK dynamics can be exploited in those fields as well. For example, we can perhaps increase the fidelity of state transmission along a spin chain, in the presence of imperfections, by applying a "tension" at both ends of it [26] (see figure 4(d)). The sole convergence toward the ground state could, instead, find applications in all-optical switching of light in waveguide arrays [27]: the injected light pulse can be steered toward a given position by a suitable tuning of the thermal gradient which determines the potential profile of the lattice. Future work should be devoted to further investigation of these open research problems.
Figure 4 caption (fragment): for g = 0, β = 0, σ = 2σ_0 the probability of ever reaching the δ = 2 rightmost sites is negligible (Anderson localization); (c): g = 3g_0, β = 4g_0, σ = 2σ_0: viscous friction allows the particle to drift to the right, by successive sojourns (the vertical strips) around successive minima of the Anderson potential; the ensuing slow transfer of the probability mass to the right of Λ_s is shown in frame (d). | 2008-12-03T11:18:05.000Z | 2008-12-03T00:00:00.000 | {
"year": 2008,
"sha1": "790d71ad1ef4a618ba2ba18b266bafb595cacdd9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0812.0694",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "790d71ad1ef4a618ba2ba18b266bafb595cacdd9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54502858 | pes2o/s2orc | v3-fos-license | Neutrinoless double-beta decay search with CUORE and CUORE-0 experiments
The Cryogenic Underground Observatory for Rare Events (CUORE) is an upcoming experiment designed to search for the neutrinoless double-beta decays. Observation of the process would unambiguously establish that neutrinos are Majorana particles and provide information on their absolute mass scale hierarchy. CUORE is now under construction and will consist of an array of 988 TeO2 crystal bolometers operated at 10 mK, but the first tower (CUORE-0) is already taking data. The experimental techniques used will be presented as well as the preliminary CUORE-0 results. The current status of the full-mass experiment and its expected sensitivity will then be discussed.
Introduction
Since the discovery of neutrino oscillations the interest in neutrino physics has increased, but some crucial questions concerning the nature of neutrinos remain open: the ordering and the absolute scale of the masses of the three generations, and the charge conjugation and lepton number conservation properties. If neutrinos are Majorana particles that differ from antineutrinos only by helicity, an important consequence is that lepton number violation must occur. The process of neutrinoless double-beta decay (0νββ) has the potential to provide insights on all these issues with unprecedented sensitivity. In fact, 0νββ is the most realistic process and, at present, the only practical means of experimental investigation of these topics [1] [2].
Observation of the 0νββ process, which violates lepton number conservation, would demonstrate the Majorana nature of neutrinos. At the same time it would make it possible to set constraints on the absolute mass scale. It should be noted, however, that 0νββ could also be mediated by some exotic mechanism that would spoil most of the information concerning the neutrino mass; nevertheless, it would still be the only way to probe the Majorana nature of neutrinos.
The neutrinoless double-beta decay process
Double-beta decay (2νββ) is a rare spontaneous nuclear transition (Z, A) → (Z+2, A) + 2e− + 2ν̄_e in which a parent nucleus decays to a daughter with the simultaneous emission of two electrons. Within the Standard Model this is an allowed second-order weak process, already observed in different isotopes with an even number of neutrons and protons where single-beta decay is either energetically forbidden or kinematically suppressed. The measured half-lives are as high as 10^18-10^21 y, see e.g. [3]. If neutrinos are Majorana particles, i.e. identical to their own antiparticle, the ν̄_e from one single beta decay may be absorbed in the second beta-decay vertex (through helicity flipping), which would result in a final state without neutrinos and a lepton number violation of two units: (Z, A) → (Z+2, A) + 2e−. Half-life limits have been set for several isotopes; no experimental evidence of 0νββ has been found to date, except for a controversial claim in 76Ge [7], which is hardly compatible with more recent results, see for example [8][9][10]. At present, a combination of results in 76Ge yields T^{0ν}_{1/2} > 3.0 × 10^25 y (90% C.L.).
The decay rate
The decay rate of the 0νββ process is proportional to the square of the so-called effective Majorana mass m_ββ and can be expressed as

[T^{0ν}_{1/2}]^{−1} = G^{0ν}(Q, Z) |M^{0ν}|² |m_ββ|² / m_e²,

where G^{0ν}(Q, Z) is the phase-space factor (which can be calculated); M^{0ν} is the transition nuclear matrix element (which can also be calculated, but different models may disagree by a factor of two to three, see, e.g., [5]); and m_e is the electron mass. m_ββ measures a specific mixture of neutrino mass eigenvalues:

m_ββ = |Σ_i U_{ei}² m_i|.

Therefore, from the half-life it is possible to infer important information concerning the mass hierarchy (Δm_12, ±Δm_23) and the neutrino absolute mass scale (m_0). Present data from neutrino oscillation experiments tend to favour a range of m_ββ values between 10 and 50 meV for the inverted hierarchy and roughly a factor 10 smaller for the normal hierarchy [1,6].
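To see how m_ββ maps onto the quoted meV range, a small numerical illustration follows. The mixing angles, mass splittings and (zero) Majorana phases are assumed representative values, not figures from this paper.

```python
# Illustrative evaluation of the effective Majorana mass
# m_bb = |sum_i U_ei^2 m_i| for an inverted-hierarchy spectrum.
# Mixing angles, splittings and (zero) Majorana phases are assumptions.
import numpy as np

s12sq, s13sq = 0.31, 0.022        # sin^2(theta12), sin^2(theta13) (assumed)
dm21sq = 7.4e-5                   # solar splitting, eV^2 (assumed)
dm32sq = 2.5e-3                   # atmospheric splitting, eV^2 (assumed)

def m_bb(m_lightest, alpha1=0.0, alpha2=0.0):
    """|sum U_ei^2 m_i| in eV for the inverted hierarchy (m3 lightest)."""
    m3 = m_lightest
    m1 = np.sqrt(m3**2 + dm32sq)
    m2 = np.sqrt(m1**2 + dm21sq)
    c12sq, c13sq = 1.0 - s12sq, 1.0 - s13sq
    return abs(c12sq * c13sq * m1 * np.exp(1j * alpha1)
               + s12sq * c13sq * m2 * np.exp(1j * alpha2)
               + s13sq * m3)

print(f"m_bb ~ {1e3 * m_bb(1e-3):.0f} meV")  # tens of meV, as quoted above
```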
Signature
A convenient experimental signature is given by the combined energy of the two final-state electrons, which are emitted simultaneously. Since the nucleus is heavy, essentially all the energy is shared between the two electrons and the recoil is negligible; the 0νββ decay signature would therefore be a monochromatic line at the transition energy (Q-value) of the decay, while for the 2νββ process a continuum spectrum between 0 and the Q-value would be observed. In both cases the distribution is smeared by the finite energy resolution of the detector and the tail of the 2νββ distribution may overlap the 0νββ peak (fig. 2). Should a 0νββ peak be observed, the half-life could be estimated as

T^{0ν}_{1/2} = ln 2 · ε · N_ββ · T / N_peak,

where T is the duration of the measurement, ε the detection efficiency, N_ββ the number of source nuclei, and N_peak the number of observed 0νββ decays.
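A quick numerical illustration of this counting relation, using the CUORE numbers quoted later in the text (741 kg of TeO2, 34.2% isotopic abundance, CUORE-0 efficiency); the TeO2 molar mass and the signal count are assumptions.

```python
# The counting relation above, evaluated with the CUORE numbers quoted
# later in the text; the molar mass and the peak count are assumptions.
import numpy as np

N_A = 6.022e23                       # Avogadro's number, 1/mol
ia = 0.342                           # 130Te isotopic abundance
M_kg = 741.0                         # total TeO2 mass (CUORE)
W = 0.1596                           # kg/mol, TeO2 molar mass (approximate)
eps = 0.776                          # detection efficiency (CUORE-0 value)

N_bb = ia * M_kg / W * N_A           # ~1e27 source nuclei of 130Te
T_y, N_peak = 5.0, 10                # live time and hypothetical peak counts
T_half = np.log(2) * eps * N_bb * T_y / N_peak
print(f"T_1/2 ~ {T_half:.1e} y")
```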
Bolometric technique
CUORE (Cryogenic Underground Observatory for Rare Events) will use TeO2 crystals as bolometers to search for the 0νββ decay of 130Te. This technique was proposed by E. Fiorini and T. O. Niinikoski in 1984 [11]. A bolometer is a calorimeter composed of an energy absorber, in which the energy deposited by a particle is converted into phonons, and a sensor that converts the thermal excitation (temperature rise) into a readable electric signal. In our experimental setup the TeO2 crystals contain the decay isotope (130Te) and, at the same time, act as detectors (the absorber material).
The temperature rise ΔT is related to the energy release ΔE and can be written as ΔT = ΔE/C, where C is the heat capacity of the bolometer. When the crystals are cooled down to very low temperatures (a few mK), C becomes so small that a few keV deposited into the detector will produce a measurable temperature rise. At the projected base temperature of about 10 mK the typical signal amplitude is ΔT/ΔE ∼ 10-20 μK/MeV [4]. The accumulated heat then flows to a heat bath through a thermal link so that the absorber returns to the base temperature (this is reached in less than 5 s). The temperature rise resulting from a single nuclear decay is measured by a thermistor. A Neutron Transmutation Doped thermistor is glued on each crystal. Since the thermal response of bolometers varies with temperature, a silicon Joule heater is glued to each crystal for the offline correction of thermal gain drifts with time.
The energy resolution of a bolometer is limited, in principle, only by the thermodynamical fluctuations of thermal phonons through the thermal link, and does not depend on the deposited energy. In practice, the resolution ΔE is dominated by other noise contributions; still, by using large-mass bolometers, an excellent energy resolution and high detection efficiency can be achieved. Bolometers also have the advantage that they can be built with a wide range of materials, so several isotopes could be studied with this technique. The main drawbacks are that the thermal origin of the signal makes them intrinsically slow and that no event topology recognition is possible.
Sensitivity
For our kind of experimental setup, the sensitivity can be computed from simple arguments. The expected number of 0νββ events (mean value) in a period of time T is

N_0ν = ln 2 · (i.a.) · (M N_A / W) · ε · T / T^{0ν}_{1/2},

where (i.a.) is the isotopic abundance of the decay isotope, M the overall active mass, N_A Avogadro's number, W the molar mass of the detector material, and ε the detector efficiency.
In the same time period, for any experiment in which the source is embedded in the detector, the background B is given by

B = b · M · T · ΔE,

where b is the background rate per unit detector mass, energy and time (counts/(keV kg y)), and the energy window of the measurement is approximated by the energy resolution ΔE [13].
Given the background, the discovery potential of the experiment is given by the minimum number of signal counts that allows rejecting the background-only hypothesis at a given significance n_σ, expressed in terms of Gaussian standard deviations:

S = n_σ √B.

The discovery potential (sensitivity) can then be expressed in terms of the 0νββ half-life, for a given significance n_σ, as

T^{0ν}_{1/2} = (ln 2 / n_σ) · (i.a.) · (N_A / W) · ε · √(M T / (b ΔE)),   (6)

which links the sensitivity to the detector parameters. These parameters will be discussed in par. 4. Expression 6 holds as long as the number of background events is large enough to be considered Gaussian. For low-background experiments Poisson statistics should be used and, in the limit of zero background, the expression for the sensitivity changes into T^{0ν}_{1/2} ∝ (i.a.) · ε · T · M, where the sensitivity scales linearly with the detector mass.
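As a sanity check, equation 6 can be evaluated directly. The sketch below plugs in the CUORE design parameters quoted in the text (34.2% isotopic abundance, 741 kg, b = 0.01 counts/(keV kg y), ΔE = 5 keV) together with the CUORE-0 efficiency; the TeO2 molar mass W and n_σ = 1 are added assumptions, so the output should be read as an order-of-magnitude figure only.

```python
# Equation 6 evaluated numerically with the CUORE design parameters
# quoted in the text; W and n_sigma = 1 are added assumptions.
import numpy as np

N_A = 6.022e23   # Avogadro's number, 1/mol

def t_half_sensitivity(ia, eps, M_kg, T_y, b, dE_keV, W_kg_mol, n_sigma=1.0):
    """Background-limited half-life sensitivity, in years."""
    return (np.log(2) * ia * N_A * eps / (W_kg_mol * n_sigma)
            * np.sqrt(M_kg * T_y / (b * dE_keV)))

S = t_half_sensitivity(ia=0.342, eps=0.776, M_kg=741.0, T_y=5.0,
                       b=0.01, dE_keV=5.0, W_kg_mol=0.1596)
print(f"1-sigma half-life sensitivity after 5 y: {S:.1e} y")
```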
A phased search program
The CUORE collaboration is going through a phased search program at the underground Gran Sasso National Laboratories, where the flux of cosmic radiation is strongly reduced with respect to sea level. Such a program began in 2003 with the Cuoricino experiment [12] (ended in 2008). The program resumed in 2013 with the CUORE-0 experiment, built to demonstrate the feasibility of a large-scale bolometric experiment (CUORE) and its potential, and to test the stringent procedures to be adopted in the assembly line and many design improvements. This program will continue with CUORE, the full-mass setup. All experiments share the same bolometric technique and are built on radio-pure TeO2 cubic crystals. The crystals are 5×5×5 cm³ in size (Cuoricino also tested some smaller crystals) and are arranged in a compact array structure ("tower"), each floor consisting of 4 crystals.
Detector parameters
4.1 Isotopic abundance and detector mass
The choice of tellurium is due to the high natural isotopic abundance (34.2%) of the 0νββ decay candidate [14]. Also, the Q-value of the decay, around 2528 keV [15][16][17], falls in a relatively clean window in which to look for the signal, between the peak and the Compton edge of the 2615 keV gamma line of 208Tl.
Cuoricino and CUORE-0 have 62 and 52 crystals, respectively, with roughly the same detector mass of about 40 kg, organised in a single tower enclosed in a copper thermal shield and installed in the (same) cryostat. Cuoricino set the current lower limit for the half-life of 130Te, T^{0ν}_{1/2} > 2.8 × 10^24 y (90% C.L.) [12]. CUORE-0, which is now taking data, was assembled with the same materials as CUORE and according to the same radiopurity constraints imposed on all the materials facing the detectors and on the detectors themselves to reduce the background sources. Therefore it represents an opportunity to evaluate the bolometric performance of CUORE but, as a standalone experiment, it will also be able to improve on the Cuoricino results.
CUORE will consist of 988 crystals arranged in 19 towers.The total detector mass will be 741 kg and the 130 Te isotope mass is 206 kg.It was designed to search for 0νββ in 130 Te with the best sensitivity to date and will contribute in demonstrating the viability of future large-scale bolometric experiments.
Background
The background is the main limit to the CUORE sensitivity. To accomplish its suppression, the first step was to identify the main sources of background in Cuoricino. The dominant source, (50±20)% in the energy region of interest (ROI, around the Q-value), was found to be α particles emitted by contaminations of 238U, 232Th and 210Pb present on the surface of the copper parts that hold the crystals and of the materials facing the bolometers. Another (10±5)% was due to α from the same contaminants on the bolometer surfaces. Both these contributions are reduced in CUORE-0 with respect to Cuoricino thanks to the controlled construction materials and to the new dedicated surface-cleaning procedures developed for the handling and cleaning of each detector component. The cleaning procedure for the copper frames, in particular, was verified with a dedicated bolometric measurement [18]. Above the 2615 keV 208Tl peak the γ background becomes negligible and the α background dominates (fig. 5).
The second largest source, (30±10)%, was due to the γ from 208Tl originating from the decay chain of the 232Th contamination in the cryostat materials. In CUORE this component is expected to be negligible thanks to the better shielding of the detector from the cryostat.
The background due to cosmics and environmental radiation in the ROI is orders of magnitude smaller than that from the apparatus itself, thanks to a combination of the underground location and several shielding layers both outside and inside the cryostat. One of the shielding layers is made of lead bricks recovered from an ancient Roman ship sunk offshore of the Sardinian coast around the year 50 B.C. [19].
Finally, the tail from 2νββ is also negligible thanks to the excellent energy resolution.
The CUORE design background is 0.01 counts/(keV kg y). The overall background in the ROI is reported in Table 1.
Energy Resolution
The energy resolution in the ROI was evaluated, with the CUORE-0 first-phase data, as the FWHM of the 2615 keV photopeak originating from the 208Tl decay (which is close to the Q-value of 2527.5 keV). The overall detector resolution was found to be 5.7 keV [20]. During some R&D runs the target energy resolution of 5 keV was consistently achieved [4], and in phase II of CUORE-0 it reached 4.8 keV. It should also be noted that the R&D tests and CUORE-0 run at a base temperature of about 13 mK, higher than the CUORE projected base temperature of 10 mK. In conclusion, it is expected that the CUORE performance will reach the project value of ΔE ≈ 5 keV or better.
Efficiency
The selection efficiency is evaluated mainly using the 2615 keV γ peak, which offers sufficient statistics at the energy closest to the ROI. The physical detector efficiency alone is (87.4±1.1)% and represents the containment of the detector for double-beta decay signals. By including the trigger and selection efficiencies (see par. 5), the total 0νββ detection efficiency of CUORE-0 is estimated to be (77.6±1.3)%.
CUORE-0 Preliminary Results
CUORE-0 is operated in the same cryostat, uses the same external shielding, and is enclosed in the same Faraday cage that was used for Cuoricino. The front-end electronics and the data acquisition hardware are also the same.
The offline data analysis follows the procedure developed for Cuoricino [12,21]. Each bolometer has an independent signal trigger threshold and is pulsed periodically with a fixed, known energy through the heater. The pulsed-energy events are used to correct for small shifts in the thermal gain of the bolometers. Bolometer signals are amplified, filtered, digitized and then converted into energies using calibration data. Each channel is periodically calibrated using γ rays from daughter nuclei of 232Th in the energy range from 511 to 2615 keV. Events occurring within ±100 ms of each other in any two or more crystals are attributed to background and therefore rejected. The pile-up selection cut requires that only one pulse exists in a 7.1 s window around the measured trigger time. Subsequent selections impose shape requirements on the signal pulses. CUORE-0 has been taking data in stable operating conditions from March to September 2013 ("phase I") and from November 2013 to June 2014 ("phase II"). A third phase is now going on. Phase I data were published in [20]; the preliminary results shown in fig. 7 refer to the data collected during phases I and II. The accumulated TeO2 exposure on 49 fully active channels is 18.06 kg·y, for a 130Te isotopic exposure of 5.02 kg·y.
At present the data in the ROI are blinded while more statistics is being accumulated and the event selection is optimised. A blinded fraction of events within ±10 keV of the 2615 keV γ peak is randomly exchanged with events within ±10 keV of the 0νββ Q-value. Since the number of 2615 keV γ events is much larger than the number of possible 0νββ events, the blinding algorithm produces an artificial peak around the 0νββ Q-value. CUORE-0 has already demonstrated the feasibility of instrumenting an ultra-pure ton-scale bolometer array like CUORE.
Construction status
While CUORE-0 is taking data, the collaboration is building the full-mass-scale experiment, CUORE (fig. 8), with the goal of beginning data taking in the summer of 2015. The assembly process consists of four main steps: thermistors and heaters are glued to the crystals; the instrumented crystals are assembled together into a single tower; the readout cables are attached to the tower; and, for each crystal, the thermistor and heater chips are bonded to the readout cable trays. All the assembly procedures are performed in glove boxes flushed with nitrogen in an underground clean room, with custom-designed tools that make the whole procedure semi-automatic.
At present, all 19 towers have been assembled, instrumented, and stored in a nitrogen-flushed atmosphere while waiting to be installed in the cryostat (fig. 9). The collaboration is now moving toward detector integration, which includes the commissioning of the new cryostat and the installation of the calibration system, data acquisition system, Faraday cage and other auxiliary systems, such as the slow control and monitoring system.
Projected sensitivity
The closely packed CUORE detector geometry will carry some benefits with respect to CUORE-0. First, it will enable a significant improvement in the anticoincidence analysis between close crystals, further reducing the background. Second, the fraction of crystals facing the contaminated inner shield will be reduced. Additionally, the new cryostat is built with cleaner materials and is better shielded.
CUORE-0 has demonstrated that the CUORE design parameters of equation 6 reported in Table 2, in particular the total mass, the background rate level and the energy resolution, can be reached. With such parameters, it is possible to compute the final sensitivity of CUORE as a function of the live time. An overview of the 1σ sensitivity of CUORE-0 and CUORE is shown in fig. 10; a half-life sensitivity close to 10^25 y is expected. CUORE will fully explore the half-life corresponding to the claim of observation in 76Ge and will allow the investigation of the upper region of the effective Majorana neutrino mass phase space for the inverted hierarchy of neutrino masses.
Beyond CUORE
Next-generation experiments should be able to explore the whole inverted-hierarchy region of effective Majorana masses. Should the next-generation experiments fail to find 0νββ, it may still be possible, thanks to the input from other experiments, to draw some conclusions on the nature of neutrinos: if the neutrino is proven to be a Majorana particle, then the mass hierarchy would have to be normal; on the other hand, if the mass hierarchy is proven to be inverted, then the neutrino would have to be a Dirac particle.
Since it is unlikely that CUORE itself will be able to reach a rate of 0.001 counts/(keV kg y), several R&D programs are already underway investigating new ideas and techniques for active background rejection. In CUORE the sensitivity is mainly limited by background, which is characterised by α and β/γ emission in the MeV region. Particle identification is therefore the main emphasis of future bolometer applications. To this end, additional detection channels are needed, since the absorber does not respond differently to energy releases of different particle types. To distinguish signal electrons from the α background, light emission can be used, either from Cherenkov radiation [22,23] or scintillation light [24,25,28,29], where the auxiliary light detector is usually another bolometer facing the main one. Recently, new studies on scintillating bolometers showed the possibility of distinguishing α from β/γ particles without light readout, thanks to a different time-dependent shape of the heat signal [30,31]. Alternative methods based on the identification of surface interactions have also been devised (see, e.g., [26,27]).
Conclusions
A brief introduction to the bolometric technique used to search for neutrinoless double-beta decay was given and the main experimental challenges were outlined. The physics reach of CUORE-0 and CUORE was illustrated.
CUORE-0 is at present the most sensitive experiment searching for 0νββ in 130Te. Although it is still taking data, it has already confirmed that the design parameters of the full-size experiment, CUORE, can be reached, in particular with respect to the energy resolution and the background rate.
CUORE is now being built and is expected to begin taking data in 2015. With excellent energy resolution and a large isotope mass, CUORE is one of the most competitive 0νββ experiments under construction. The target background of 0.01 counts/(keV kg y) seems within reach.
Figure 2. Energy spectrum of the electrons of the 2νββ and 0νββ decays.
Figure 6. The 208Tl decay line used to estimate the energy resolution in CUORE-0.
Figure 7. CUORE-0 energy spectrum: preliminary results of data-taking phases I and II.
Figure 10. 1σ sensitivity for CUORE-0 and CUORE computed using the experimental parameters given in Table 2.
Figure 11. CUORE 1σ sensitivity in terms of effective Majorana mass for 5 years of live time, computed using the experimental parameters given in Table 2. The bands correspond to the maximum and minimum m_ββ values obtained from different nuclear matrix elements.
Table 1. Total background in the ROI and in the α region, in counts/(keV kg y). For CUORE the predicted value is reported.
Table 2. Experimental parameter values used for the sensitivity of CUORE-0 and CUORE. Symbols are defined in equation 6. | 2018-12-02T00:20:36.559Z | 2015-03-24T00:00:00.000 | {
"year": 2015,
"sha1": "39f350d50b444d7d92adc46692e97453901766d1",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2015/09/epjconf_ismd2015_03004.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "60926a6e0518bc3e21953804fcfe2ebef2982d40",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
224305220 | pes2o/s2orc | v3-fos-license | Screening Social Determinants of Health in a Multidisciplinary Severe Asthma Clinical Program
Supplemental Digital Content is available in the text.
Abstract
Introduction: Asthma is the most common cause of chronic disease in children and has high healthcare utilization costs. Minority children living in poverty have a higher asthma burden. These health disparities are associated with the social determinants of health (SDH). A severe asthma clinic was implemented at Rady Children's Hospital in San Diego to determine whether a multidisciplinary approach, including an asthma home visit addressing SDH, would lead to decreased healthcare utilization in terms of emergency department (ED) visits and hospitalizations. Methods: Patients with 2 or more ED visits in the past 6 months or 2 or more hospitalizations in the previous year were recruited to Rady Children's Hospital Severe Asthma Clinic. A multidisciplinary team evaluated each patient systematically. A subset of patients on capitated Medicaid insurance plans also had a comprehensive asthma home visit with community health workers as part of the Community Approach to Severe Asthma (CASA) program. Results: A significant reduction in ED visits (75%, P < 0.001) and hospitalization days (73%, P < 0.001) was demonstrated in 74 Severe Asthma Clinic participants with 1 year of pre-/postdata to analyze. In a subset of 12 patients in the CASA program, further reductions in ED visits (90%, P = 0.002) were also demonstrated. Basic needs, including shelter, food, and assistance with utilities, were the most common domain of SDH identified and addressed in CASA participants. Conclusion: We demonstrate that a novel pediatric severe asthma clinic with a multidisciplinary approach, including actively addressing SDH, is associated with decreasing health care utilization. (Pediatr Qual Saf 2020;5:e360; doi: 10.1097/pq9.0000000000000360; Published online September 25, 2020.)
INTRODUCTION
Asthma is the leading cause of chronic disease in the pediatric population in the United States, affecting over 6 million children and 8% of the population. 1 It accounts for over $82 billion per year in costs associated with health care utilization. 2 Children from minority ethnicities and living in poverty have higher prevalence rates of asthma and suffer significantly increased morbidity and mortality compared with age-matched white children. 3,4 In San Diego County, asthma-related emergency department (ED) visits are 5 times higher for black children (109.6 per 10,000) and 2 times higher for Hispanic children (44.7 per 10,000) than for white children (20.9 per 10,000). 5 The complex social and environmental factors that shape these health disparities are referred to as the social determinants of health (SDH) and powerfully impact chronic conditions like asthma. 6 Low-income housing disproportionately exposes children to multiple indoor allergens and outdoor pollutants that can exacerbate asthma symptoms. 7,8 Poor children are also exposed to higher levels of family turmoil, violence, separation, instability, and chaotic household conditions. 9 Severe asthma in children is defined as persistent uncontrolled asthma despite maximal therapy. This term comprises asthma-mimicking conditions, asthma that is difficult to treat because of comorbidities, improper inhaler technique, poor therapeutic adherence, or other environmental factors, and true severe therapy-resistant asthma, as defined by the latest European Respiratory Society/American Thoracic Society definition. 10 Dedicated severe asthma programs have shown efficacy in reducing asthma burden in the literature.
11 Successful asthma programs, including the Community Asthma Initiative in Boston, 12 target high-risk patients, provide education and home environment assessment, and coordinate community, public health, and social services. 13 The Severe Asthma Clinic (SAC) at Rady Children's Hospital in San Diego began in 2015 with the aim of providing coordinated care to children with severe asthma and reducing health disparities. Although the SAC sees all patients with severe asthma, patients with Medicaid or California Covered Services insurance predominate (74%, Table 1). These insurance programs are available to low-income children and are funded at the federal and state level, respectively. Although multidisciplinary clinics and home visiting programs have demonstrated success in the past, they have infrequently screened for SDH. 14 In this novel observational retrospective analysis, we aimed to determine whether a dedicated severe asthma service, in combination with a home visiting program that screened for SDH, would lead to a decrease in asthma-related ED visits and hospitalization days in this specific pediatric population.
METHODS
Inclusion criteria for the SAC were children aged 2-18 with 2 or more ED visits or hospitalizations in the previous 6 months with an asthma-related episode as the primary diagnosis. Patients meeting these criteria are referred by Rady inpatient hospitalists and specialist physicians within the Allergy/Immunology and Pulmonology divisions. A multidisciplinary team, including allergy/immunology, pulmonology, clinical pharmacy, respiratory therapy, and nurse case management, sees each SAC patient. The visits focus on confirmation of the diagnosis, evaluation of comorbidities, and optimization of medication and adherence. Patients have follow-up visits in the SAC every 3 months, about 4 visits annually. The no-show rate for the SAC is 15%.
With funding from the California Department of Public Health (CDPH), a subset of SAC patients on Rady capitated Medi-Cal plans also had an asthma home visit soon after their first SAC visit. This home visit was scheduled with Rady bilingual community health workers (CHWs) as part of the Community Approach to Severe Asthma (CASA) program. CDPH and Rady physicians, social workers, and nurses trained the CHWs on how to interact with families and perform asthma home visits. The CASA program is shaped on principles of the social-ecological model. This model emphasizes several layers of influence, including individual, interpersonal, organizational, community, and local public policy that can affect asthma management and outcomes for patients.
The program employs several quality improvement strategies to optimize success. The team, composed of CHWs, a social worker, a program coordinator, and a physician, took a Kaizen approach by holding a weekly meeting focused on making improvements to help decrease asthma-related events for our patient population. 15 This team approach was chosen because of the varied skills of the team members. At the onset, the team invited community stakeholders to discuss how best to provide resources to families in the San Diego community at the Rady Children's Hospital Asthma Forum in November 2016. The team then performed key informant interviews with different asthma programs throughout the United States, including Alameda Asthma Start, 16 the Respira Sano project in Imperial Valley, 17 the Yes We Can Urban Asthma Partnership, 18 and the Boston Home Asthma Visit Collaborative 19 to help shape this program and its evaluation metrics.
Like these programs, the main goal was to evaluate asthma control in the patient population by demonstrating a reduction in ED visits and hospitalizations. These programs are all informed by the National Asthma Education and Prevention Program guidelines, which recommend environmental trigger reduction in patients with asthma to improve asthma control. 20 With this input, the team then developed the process map integrating the CASA program with the SAC (Fig. 1). After initiating the program, the team performed regular Plan-Do-Study-Act cycles to ensure that coordination between the SAC, the CASA program, and each patient was optimized. Lastly, CASA team members were recognized at the annual Rady Children's Hospital Research Symposium, which also allowed for constructive feedback regarding the CASA program.
At the home visit, CHWs conduct a walk-through environmental assessment for asthma triggers developed from the Boston Asthma Home Visit Collaborative 19 and, with assistance from the CDPH, provide CDPH recipes for simple cleaning solutions. Furthermore, the CHWs reinforce clinician SAC instructions, including medication changes, inhaler technique, and home environment recommendations such as obtaining dust mite encasements or mold abatement. They set behavioral/environmental change goals based on the social cognitive theory, which suggests that creating an environment conducive to change makes adopting positive behaviors easier. Lastly, they facilitate an opt-in link with 211-San Diego (211) to screen for SDH. The CHWs called families to remind them about upcoming SAC appointments and relayed patient concerns to the SAC. The local nonprofit organization, 211, provides access to 6,000 community, health, social, and disaster services in San Diego. It developed a specific risk assessment tool for this project with 14 SDH domains based on Healthy People 2020 strategies. 21 The 211 service provided resources based on risk assessment responses and followed up with families by phone to monitor the reduction in risk over 12 months. It also provided monthly reports on referrals to the CASA program, allowing further follow-up and coordination with the SAC physicians in follow-up clinic visits. As the goal was improving patient outcomes, the University of California, San Diego Institutional Review Board determined this project to be a quality improvement project and waived the requirement for Institutional Review Board approval.
RESULTS
The SAC saw 86 patients from 2017 to 2019. At the time of submission, 10 patients had not yet reached 1 year of postintervention data collection, and 2 had moved out of state (2.3% lost to follow-up), leaving 74 patients with 12 months of pre-/postintervention data to analyze from Rady's electronic medical record system for a primary diagnosis of asthma-related events. A paired t test was used to analyze the change in the number of visits pre- and postintervention. The McNemar test was used to analyze the change in the number of patients who had visits pre- and postintervention. A cutoff of 2 visits was used to create binary variables. All patients had poorly controlled asthma (by EPR asthma guidelines) 20 and were on inhaled corticosteroids before their first SAC visit. Nested within the SAC were 12 (16%) patients who had a home visit as part of the CASA home visit program. The patients were primarily Hispanic (55%) and black children (19%) with Medi-Cal (Medicaid) and California Covered Services insurance (74%) (Table 1). Significant reductions were seen in patients who had 2 or more ED visits in both the SAC (pre: 54, post: 13, 75% reduction, P < 0.001) and the nested CASA program (pre: 11, post: 1, 90% reduction, P = 0.002). The SAC population showed significant reductions in patients with 2 or more hospitalization days (pre: 45, post: 12, 73% reduction, P < 0.001). The CASA program demonstrated a decreasing trend (pre: 6, post: 2, 67% reduction, P = 0.125). The results are summarized in Figure 2A.
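The pre/post comparison described above can be reproduced in outline with standard Python statistics libraries. The sketch below uses hypothetical per-patient visit counts (the patient-level data are not public); `scipy` supplies the paired t test and `statsmodels` the McNemar test, with the 2-visit cutoff creating the binary variables.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
pre = rng.poisson(2.5, size=74)    # hypothetical ED-visit counts, 12 months pre
post = rng.poisson(0.8, size=74)   # hypothetical counts, 12 months post

# Paired t test on the change in the number of visits per patient.
t_stat, p_paired = stats.ttest_rel(pre, post)

# McNemar test on the binary "2 or more visits" indicator (the 2-visit cutoff).
pre_hi, post_hi = pre >= 2, post >= 2
table = np.array([
    [np.sum(pre_hi & post_hi),  np.sum(pre_hi & ~post_hi)],
    [np.sum(~pre_hi & post_hi), np.sum(~pre_hi & ~post_hi)],
])
result = mcnemar(table, exact=True)
print(f"paired t: p={p_paired:.4f}; McNemar: p={result.pvalue:.4f}")
```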
A P-chart was created to visually describe what percent of patients had ED visits (Fig. 2B) and/or hospitalization (Fig. 2C) each month before and after the program start for the SAC and nested CASA group. Three-sigma limits were used to calculate upper and lower control limits (Fig. 2B, C). Generally, a higher percentage of patients had ED visits and hospitalizations in the 2 to 3 months before the intervention. The percentage of patients who had ED visits and hospitalizations after the intervention remained low for 12 months postintervention.
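The control limits of such a P-chart follow from the usual binomial three-sigma formula. The following sketch shows the computation with illustrative monthly counts; the numbers are placeholders, not the study's data.

```python
import numpy as np

n = 74                                   # patients tracked each month (SAC cohort size)
events = np.array([9, 7, 8, 6, 2, 1, 2, 1, 1, 2, 1, 1])  # hypothetical monthly ED-visit counts
p = events / n                           # monthly proportion of patients with a visit
p_bar = p.mean()                         # center line of the P-chart
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma                  # upper three-sigma control limit
lcl = max(p_bar - 3 * sigma, 0.0)        # lower limit, clipped at zero
print(f"center={p_bar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```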
The 211 health navigators provided a comprehensive assessment of unmet social, behavioral, and health needs. Of the 12 participants contacted by the 211 staff, 88 SDH-related needs divided into different domains were identified utilizing the telephone questionnaire risk assessment tool. Responders indicated the "Basic Needs" domain (food, shelter, and utilities) as the most common need (37%), with "utilities" most commonly identified within the group (Table S1, Supplemental Digital Content, which shows the 211 needs assessment for CASA participants. Social determinants of health domains are divided into a first and second level in the first and second columns, respectively. The taxonomy of resources provided is identified in the third column. The number of participants receiving the specific taxonomic resources is identified in the fourth column, http://links.lww.com/PQ9/A215).
DISCUSSION
Pediatric patients seen at the SAC demonstrated a significant decrease in ED visits and hospitalizations. A multidisciplinary approach allows for the coordination of the expertise needed to manage the complexities of severe pediatric asthma. 22,23 In a subset of patients in the CASA program, who had a CHW asthma home visit and a 211 referral to manage their asthma further and address SDH issues, respectively, decreases in hospitalization days were also demonstrated. Although the sample size is small, these findings are consistent with the literature showing decreased healthcare utilization (ED visits and hospitalizations) with CHW asthma home visit programs. 2,11,12 CHW asthma home visit programs provide asthma self-management support with fewer time and access limitations than the clinic setting and are increasingly being utilized in asthma management. 12 Data from our P-chart analysis showed a higher percentage of ED visits and hospitalization days in the 2 to 3 months before the intervention, suggesting that recruiting patients soon after an ED visit or hospitalization may increase participation in the intervention. We also demonstrated that our intervention had a sustained effect over the 12 months postintervention.
By beginning to assess SDH through the 211 partnership, the CASA program was able to identify other barriers to care, including transportation, stable housing, and access to healthy food. Based on their assessment results, the 211 health navigators established a care plan, including a direct link to services. This plan included helping families access housing lists, filling out applications, and following the progress of applications with various agencies (Table S1, Supplemental Digital Content, http://links.lww.com/PQ9/A215).
The findings in this report are subject to several limitations. First, this evaluation was not a randomized controlled trial, so the team cannot account for a placebo effect or regression to the mean. The contributions of individual components of the program to outcomes were not evaluated; the team is planning a randomized controlled trial based on these results to address this limitation. Also, the administrative data may not capture all emergency care received outside of Rady's network. However, Rady's electronic medical record does capture encounter-level data with other institutions, which limits this possibility.
Lastly, the small sample size potentially limits the interpretation of the results. Similar findings replicated in a larger sample would be even more compelling.
CONCLUSIONS
Based on a thorough review, this SAC is thought to be the first subspecialty multidisciplinary pediatric asthma clinic to actively screen SDH domains, and the findings indicate that a multidisciplinary approach to asthma with a home visit and screening of SDH will help decrease healthcare utilization in a pediatric population with severe asthma. Future research of this approach at Rady Children's Hospital has begun and will elucidate whether similar findings are confirmed as the program is scaled up.

Fig. 2. Comparison of healthcare utilization before and after intervention. A, Pre- and postintervention changes in ED visits (2 or more) and hospitalization days (2 or more) of subjects in the SAC (N = 74) and the subset who participated in the CASA program (N = 12). Both the pre- and postintervention records contain each patient's visits/hospitalization days for 12 months. B, P-chart of pre- and postintervention changes in the percentage of subjects in the SAC and the CASA program who had ED visits over the 12 months before and after the intervention. C, P-chart of pre- and postintervention changes in the percentage of subjects in the SAC and the CASA program who had hospitalization days over the 12 months before and after the intervention.
DISCLOSURES
Dr Phipatanakul is a consultant for Genentech, Novartis, Regeneron, GSK, AstraZeneca, Sanofi, and Teva; received additional funding support from NIH K24 AI 106822; received grant support from Genentech, Novartis, Regeneron, GSK, Kaleo, Monaghan, Alk Abello, Lincoln Diagnostics, and Thermo Fisher; and all financial relationships are unrelated to the content of this article. Dr Leibel is a consultant for Thermo Fisher and Genentech; received grant support from Genentech; and all financial relationships are unrelated to the content of this article. Dr Geng is a consultant for Genentech, Novartis, CSL, Shire, RMS, and Diplomat; is a speaker for Regeneron, CSL, Optinose, Mead-Johnson, and Horizon; received grant support from Genentech, Novartis, GSK, and Stallergenes; and all financial relationships are unrelated to the content of this article. The other authors have no financial interest to declare. | 2020-10-19T13:45:25.017Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "8ed96d8ca625fdddecb6e3849f957dc671605bdf",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/pq9.0000000000000360",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ed96d8ca625fdddecb6e3849f957dc671605bdf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155102334 | pes2o/s2orc | v3-fos-license | Cardiovascular disease risk prediction using automated machine learning: A prospective study of 423,604 UK Biobank participants
Background Identifying people at risk of cardiovascular diseases (CVD) is a cornerstone of preventative cardiology. Risk prediction models currently recommended by clinical guidelines are typically based on a limited number of predictors with sub-optimal performance across all patient groups. Data-driven techniques based on machine learning (ML) might improve the performance of risk predictions by agnostically discovering novel risk predictors and learning the complex interactions between them. We tested (1) whether ML techniques based on a state-of-the-art automated ML framework (AutoPrognosis) could improve CVD risk prediction compared to traditional approaches, and (2) whether considering non-traditional variables could increase the accuracy of CVD risk predictions. Methods and findings Using data on 423,604 participants without CVD at baseline in UK Biobank, we developed an ML-based model for predicting CVD risk based on 473 available variables. Our ML-based model was derived using AutoPrognosis, an algorithmic tool that automatically selects and tunes ensembles of ML modeling pipelines (comprising data imputation, feature processing, classification and calibration algorithms). We compared our model with a well-established risk prediction algorithm based on conventional CVD risk factors (Framingham score), a Cox proportional hazards (PH) model based on familiar risk factors (i.e., age, gender, smoking status, systolic blood pressure, history of diabetes, treatment for hypertension, and body mass index), and a Cox PH model based on all of the 473 available variables. Predictive performances were assessed using area under the receiver operating characteristic curve (AUC-ROC). Overall, our AutoPrognosis model improved risk prediction (AUC-ROC: 0.774, 95% CI: 0.768-0.780) compared to Framingham score (AUC-ROC: 0.724, 95% CI: 0.720-0.728, p < 0.001), Cox PH model with conventional risk factors (AUC-ROC: 0.734, 95% CI: 0.729-0.739, p < 0.001), and Cox PH model with all UK Biobank variables (AUC-ROC: 0.758, 95% CI: 0.753-0.763, p < 0.001). Out of 4,801 CVD cases recorded within 5 years of baseline, AutoPrognosis was able to correctly predict 368 more cases compared to the Framingham score. Our AutoPrognosis model included predictors that are not usually considered in existing risk prediction models, such as the individuals’ usual walking pace and their self-reported overall health rating. Furthermore, our model improved risk prediction in potentially relevant sub-populations, such as in individuals with history of diabetes. We also highlight the relative benefits accrued from including more information into a predictive model (information gain) as compared to the benefits of using more complex models (modeling gain). Conclusions Our AutoPrognosis model improves the accuracy of CVD risk prediction in the UK Biobank population. This approach performs well in traditionally poorly served patient subgroups. Additionally, AutoPrognosis uncovered novel predictors for CVD disease that may now be tested in prospective studies. We found that the “information gain” achieved by considering more risk factors in the predictive model was significantly higher than the “modeling gain” achieved by adopting complex predictive models.
Introduction
Globally, cardiovascular disease (CVD) remains the leading cause of morbidity and mortality [1]. Current clinical guidelines for primary prevention of CVD emphasize the need to identify asymptomatic patients who may benefit from preventive action (e.g., initiation of statin therapy [2]) based on their predicted risk [3][4][5][6]. Different guidelines recommend different algorithms for risk prediction. For example, the 2010 American College of Cardiology/American Heart Association (ACC/AHA) guideline [7] recommended use of the Framingham Risk Score [4], whereas the 2016 European guidelines recommended use of the Systematic Coronary Risk Evaluation (SCORE) algorithm [8]. In the UK, the current National Institute for Health and Care Excellence (NICE) guidelines recommend use of the QRISK2 score to guide the initiation of lipid lowering therapies [9,10].
Existing risk prediction algorithms are typically developed using multivariate regression models that combine information on a limited number of well-established risk factors, and generally assume that all such factors are related to the CVD outcomes in a linear fashion, with limited or no interactions between the different factors. Because of their restrictive modeling assumptions and limited number of predictors, existing algorithms generally exhibit modest predictive performance [11], especially for certain sub-populations such as individuals with diabetes [12][13][14][15] or rheumatoid arthritis [3]. Data-driven techniques based on machine learning (ML) can improve the performance of risk predictions by exploiting large data repositories to agnostically identify novel risk predictors and more complex interactions between them. However, only a few studies have investigated the potential advantages of using ML approaches for CVD risk prediction, focusing only on a limited number of ML methods [16,17] or a limited number of risk predictors [18].
Here, we aim to assess the potential value of using ML approaches to derive risk prediction models for CVD. We analyzed data on 423,604 participants without CVD at baseline in UK Biobank, a large prospective cohort study in which participants were recruited from 22 centers throughout the UK. We used a state-of-the-art automated ML method (AutoPrognosis) to develop ML-based risk prediction models and evaluated their predictive performances in the overall population and clinically relevant sub-populations. In this paper, we do not focus on the algorithmic aspects of the ML methods involved and rather focus on their clinical application. Methodological details on our automated ML algorithm can be found in our technical publication in [19].
Study design and participants
Participants were enrolled in the UK Biobank from 22 assessment centers across England, Wales, and Scotland, during the period spanning from 2006 to 2010 [20]. We extracted a cohort of participants who were 40 years of age or older and had no known history of CVD at baseline. That is, patients with previous history of coronary heart disease, other heart disease, stroke, transient ischaemic attack, peripheral arterial disease, or cardiovascular surgery were excluded from the analysis. The total number of participants who met the inclusion criteria was 423,604. The last available date of participant follow-up was Feb 17, 2016. UK Biobank obtained approval from the North West Multi-centre Research Ethics Committee (MREC), and the Community Health Index Advisory Group (CHIAG). All participants provided written informed consent prior to enrollment in the study. The UK Biobank protocol is available online [21].
The UK Biobank dataset keeps track of a large number of variables for each participant, but most of those variables are missing for most patients. In order to include the maximum possible number of (informative) variables in our analysis, we included all variables that are missing for less than 50% of patients with CVD outcomes. This corresponded to a rate of missingness of 85% for the entire population of participants. Our rationale for assessing the missingness rate among patients with CVD is that missingness itself may be informative (i.e., the chance of a variable being missing may depend on the outcome). By excluding all variables that were missing for more than 85% of the participants, a total of 473 variables were included in our analysis. We categorized all variables in the UK Biobank into 9 categories: health and medical history, lifestyle and environment, blood assays, physical activity, family history, physical measures, psychosocial factors, dietary and nutritional information, and sociodemographics [22]. The (categorized) lists of variables involved in our analysis are provided in the supporting information (S1 to S9 Tables).
Outcome
The primary outcome was the first fatal or non-fatal CVD event. A CVD event was defined as the assignment of any of the ICD-10 diagnosis codes F01 (vascular dementia), I20-I25 (coronary/ischaemic heart diseases), I50 (heart failure events, including acute and chronic systolic heart failures), and I60-I69 (cerebrovascular diseases), or any of the ICD-9 codes 410-414 (ischemic heart disease), 430-434, and 436-438 (cerebrovascular disease). Follow-up data was obtained from the hospital episode statistics (a data warehouse containing records of all patients admitted to NHS hospitals), and the equivalent datasets in Scotland and Wales [23].
Models tested
Framingham Risk Score. At the time of conducting this study, the UK Biobank had not yet released data on the participants' total cholesterol, HDL cholesterol and LDL cholesterol, which are used as predictors in various established algorithms, such as the Framingham score [4], ACC/AHA [24], QRISK2 [9], and SCORE [5]. The Framingham score, however, provides an incarnation of its underlying model based on non-laboratory predictors, which replaces lipids with Body Mass Index (BMI) [4]. Since BMI is currently collected for 99.38% of the UK Biobank participants, we compared our model with the BMI version of the Framingham score. We used the published predicting equations (beta-coefficients and survival functions) of the BMI-based Framingham model developed in [4]. (The Framingham risk calculator and model coefficients are publicly available at: https://www.framinghamheartstudy.org.) The Framingham score is based on 7 core risk factors: gender, age, systolic blood pressure, treatment for hypertension, smoking status, history of diabetes, and BMI. All of those variables were complete for the participants in the extracted cohort, with the exception of systolic blood pressure (missing for 6.8% of the participants), and BMI (missing for 0.62% of the participants). We used the MissForest non-parametric data imputation algorithm [25] to recover the missing values. Using the MissForest algorithm, we sampled 5 imputed datasets and averaged the model predictions for each participant on the 5 datasets (this is known in the literature as Rubin's rules [25]). The number of imputed datasets was selected via cross-validation.
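The imputation-and-averaging step can be sketched as follows. MissForest itself is a third-party algorithm; here scikit-learn's IterativeImputer with random-forest regressors stands in for it, and `predict_risk` is a placeholder for the published Framingham equation, so this is a minimal sketch rather than the study's exact code.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def predict_risk(X):
    # Placeholder for the published Framingham equation applied row-wise.
    return 1.0 / (1.0 + np.exp(-X.sum(axis=1)))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # toy predictor matrix
X[rng.random(X.shape) < 0.1] = np.nan    # inject ~10% missingness

predictions = []
for seed in range(5):  # 5 imputed datasets; predictions averaged (Rubin's rules)
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=10, random_state=seed),
        random_state=seed,
    )
    predictions.append(predict_risk(imputer.fit_transform(X)))
risk = np.mean(predictions, axis=0)      # per-participant averaged risk
```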
Cox proportional hazards model. We evaluated the performance of two Cox Proportional Hazards (PH) models derived from the analysis cohort: a model that only uses the traditional 7 risk factors used by the Framingham score, and a model that uses all of the 473 variables in the UK Biobank. To fit the Cox PH models, we imputed the missing data using the MissForest imputation algorithm (with 5 imputations). The Cox PH model that uses the traditional 7 risk factors used by the Framingham score can be thought of as a variant of the Framingham score calibrated to the UK population (the Framingham score was originally derived for a US population). For the Cox PH model that uses all of the 473 predictors, we applied variable selection using the LASSO method [26]. (Variable selection was applied since fitting the Cox model with all variables resulted in inferior performance due to the numerical collapse of the Cox model solvers in high dimensions.) To apply variable selection, we fit a LASSO regression model (a linear model penalized with the L1 norm) to predict the (binary) CVD outcomes. The fitted model gives a sparse solution whereby many of the estimated coefficients are zero. We select all the variables with non-zero coefficients in the fitted LASSO model and feed those variables into a Cox model fitted on the same batch of data. We optimize the LASSO model regularization parameter via cross-validation.
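A minimal sketch of this two-stage procedure is given below, assuming the `lifelines` package for the Cox fit and synthetic data in place of the UK Biobank cohort; an L1-penalized logistic model plays the role of the LASSO screen on the binary outcome.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n, p = 1000, 20
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"v{i}" for i in range(p)])
event = (rng.random(n) < 1 / (1 + np.exp(-X["v0"] - 0.5 * X["v1"]))).astype(int)
time = rng.exponential(5, size=n)        # synthetic follow-up times

# Stage 1: L1-penalized (LASSO-style) screen on the binary CVD outcome;
# the penalty strength C would be chosen via cross-validation in the study.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, event)
selected = X.columns[np.abs(lasso.coef_[0]) > 0]

# Stage 2: Cox PH model restricted to the variables that survived the screen.
df = X[selected].assign(time=time, event=event)
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
```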
Standard ML models. We considered 5 standard ML benchmarks that cover different classes of ML modeling approaches. The models under consideration are: linear support vector machines (SVM) [27] (a linear classifier), random forest [28] (a tree-based ensemble method), neural networks [29] (a deep learning method), AdaBoost [30] and gradient boosting machines [31] (boosting ensemble methods). (We also attempted to fit a kernel SVM, but fitting such a model was computationally infeasible for the UK Biobank cohort because it entails a cubic complexity in the number of datapoints.) The purpose of including those models in our experimental evaluations is to ensure that AutoPrognosis has automatically selected and tuned the best possible ML model, and that no individually-tuned ML model performed better than the model selected by AutoPrognosis. (We decided to include a gradient boosting model in retrospect because it was assigned the largest weight in the ensemble formed by AutoPrognosis.) We implemented all these models using the Scikit-learn library in the Python programming language [32]. The models' hyper-parameters were determined via grid search. Data imputation for all models was conducted using the MissForest algorithm (with 5 imputed datasets). (We attempted other imputation algorithms, such as multiple imputation by chained equations, but MissForest provided better predictive performance.)
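The benchmark setup can be outlined as follows with scikit-learn; the hyper-parameter grids shown are illustrative placeholders, since the study does not report its exact grids.

```python
from sklearn.svm import LinearSVC
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative hyper-parameter grids for the five benchmark model classes.
benchmarks = {
    "linear SVM": (LinearSVC(), {"C": [0.01, 0.1, 1.0]}),
    "random forest": (RandomForestClassifier(), {"n_estimators": [100, 300]}),
    "neural network": (MLPClassifier(max_iter=500), {"hidden_layer_sizes": [(50,), (100,)]}),
    "AdaBoost": (AdaBoostClassifier(), {"n_estimators": [100, 200]}),
    "gradient boosting": (GradientBoostingClassifier(), {"learning_rate": [0.05, 0.1]}),
}

def tune_benchmarks(X, y):
    """Grid-search each benchmark on AUC-ROC and return the best estimators."""
    best = {}
    for name, (model, grid) in benchmarks.items():
        search = GridSearchCV(model, grid, scoring="roc_auc", cv=5).fit(X, y)
        best[name] = search.best_estimator_
    return best
```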
Model development using AutoPrognosis
We developed an ML-based model for CVD risk prediction using AutoPrognosis, an algorithmic framework for automating the design of ML-based clinical prognostic models [19]. A schematic for the AutoPrognosis framework is provided in Fig 1. Given the participants' variables and CVD outcomes, AutoPrognosis uses an advanced Bayesian optimization technique [33,34] in order to (automatically) design a prognostic model made out of a weighted ensemble of ML pipelines. Each ML pipeline comprises design choices for data imputation, feature processing, classification and calibration algorithms (and their hyper-parameters). (Calibration means that the numerical outputs of a model correspond to the actual risk of a CVD event. That is, an output prediction of 20% means that the patient's 5-year risk of a CVD event is 20%.) The design space of AutoPrognosis contains 5,460 possible ML pipelines (7 possible imputation algorithms, 9 feature processing algorithms, 20 classification algorithms, and 3 calibration methods). The list of algorithms that constitute the design space of AutoPrognosis is provided in Table 1. A detailed technical and methodological description of AutoPrognosis can be found in our previous work in [19].
To train our model, we set AutoPrognosis to conduct 200 iterations of the Bayesian optimization procedure in [19], where in each iteration the algorithm explores a new ML pipeline and tunes its hyper-parameters. Cross-validation was used in every iteration to evaluate the performance of the pipeline under evaluation. The (in-sample) model learned by AutoPrognosis combined 200 weighted ML pipelines, the strongest of which comprised the MissForest data imputation algorithm, no feature processing steps, an XGBoost ensemble classifier (with 200 estimators) [35], and sigmoid regression for calibration. Details of the model learned by AutoPrognosis are provided in the supporting information (S1 Appendix). In the Results Section, we will directly refer to our model as "AutoPrognosis".
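Although the Bayesian optimization loop itself is beyond a short example, the kind of object AutoPrognosis returns, a weighted ensemble of full pipelines, can be sketched with scikit-learn building blocks. The two pipelines and the weights below are illustrative stand-ins, not the 200-pipeline ensemble actually learned.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

# Two illustrative pipelines: imputation -> (optional) feature processing ->
# classifier wrapped in a sigmoid calibrator, as in the pipelines of Fig 1.
pipelines = [
    Pipeline([("impute", SimpleImputer(strategy="median")),
              ("clf", CalibratedClassifierCV(
                  GradientBoostingClassifier(n_estimators=200), method="sigmoid"))]),
    Pipeline([("impute", SimpleImputer(strategy="mean")),
              ("pca", PCA(n_components=10)),
              ("clf", CalibratedClassifierCV(
                  RandomForestClassifier(), method="sigmoid"))]),
]
weights = np.array([0.7, 0.3])  # ensemble weights; found by the optimizer in practice

def fit_ensemble(X, y):
    for pipe in pipelines:
        pipe.fit(X, y)

def ensemble_predict(X):
    """Weighted average of the calibrated risk predictions of each pipeline."""
    probs = np.stack([pipe.predict_proba(X)[:, 1] for pipe in pipelines])
    return weights @ probs
```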
Variable ranking
In order to identify the relative importance of the 473 variables used to build our model, we use a post-hoc approach to rank the contribution of the different variables in the predictions issued by the model. The ranking is obtained by fitting a random forest model with the participants' variables as the inputs, and the predictions of our model as the outputs, and then assigning variable importance scores to the different variables using the standard permutation method in [36]. Using the permutation method, we assess the mean decrease in classification accuracy for every variable after permuting that variable over all trees. The resulting variable importance scores reflect the impact each variable has on the predictions issued by AutoPrognosis. We used the random forest algorithm for post-hoc variable ranking because it is a nonparametric algorithm that can recognize complex patterns of variable interaction while enabling principled evaluation of variable importance [36]. Other variable ranking methods based on associative classifiers (such as the one proposed in [19]) entail a computational complexity that is exponential in the number of variables, and hence are not suitable for our study as it involves more than 400 variables.
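A condensed version of this post-hoc ranking might look as follows, using scikit-learn's permutation importance on a random-forest surrogate fitted to the ensemble's risk predictions; `model` and the DataFrame `X` are placeholders, and the regressor surrogate's default R² scoring stands in for the mean-decrease-in-accuracy criterion used in the study.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def rank_variables(model, X, n_repeats=10):
    """Rank the columns of DataFrame X by their influence on the model's predictions."""
    risk = model.predict_proba(X)[:, 1]                  # ensemble's risk predictions
    surrogate = RandomForestRegressor(n_estimators=300).fit(X, risk)
    result = permutation_importance(surrogate, X, risk, n_repeats=n_repeats)
    order = result.importances_mean.argsort()[::-1]      # most important first
    return [(X.columns[i], result.importances_mean[i]) for i in order]
```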
To disentangle the "modeling gain" achieved by utilizing ML-based techniques from the "information gain" achieved by just using more variables, we created a simpler version of AutoPrognosis that only uses the same 7 core risk factors (age, gender, systolic blood pressure, smoking status, treatment of hypertension, history of diabetes, and BMI) used by the existing prediction algorithms. In addition, we created another version of the AutoPrognosis model that uses only non-laboratory variables in UK Biobank.
Statistical analysis
In order to avoid over-fitting, we evaluated the prediction accuracy of all models under consideration via 10-fold stratified cross-validation using the area under the receiver operating characteristic curve (AUC-ROC). In every cross-validation fold, a training sample (381,244 participants) was used to derive the Cox PH models, standard ML models, and our model (AutoPrognosis), and then a held-out sample (42,360 participants) was used for performance evaluation. We report the mean AUC-ROC and the 95% confidence intervals (Wilson score intervals) for all models. The calibration performance of our model was evaluated via the Brier score.
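The evaluation protocol can be sketched as below: stratified 10-fold cross-validation of the AUC-ROC, plus a Wilson score interval treating the AUC as a proportion-like statistic, as the study reports. `model`, `X` and `y` are placeholders, and the helper assumes array inputs.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cv_auc(model, X, y, n_splits=10):
    """Mean AUC-ROC over stratified folds (assumes array inputs)."""
    aucs = []
    for train, test in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
        fitted = clone(model).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], fitted.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs))

def wilson_interval(p, n, z=1.96):
    """95% Wilson score interval for a proportion-like statistic p observed on n cases."""
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half
```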
Characteristics of the study population
A total of 423,604 participants had sufficient information for inclusion in this analysis. Overall, the mean (SD) age of participants at baseline was 56.4 (8.1) years, and 188,577 participants (44.5%) were male. Over a median follow-up of 7 years (5th-95th percentile: 5.7-8.4 years; 3 million person-years at risk), there were 6,703 CVD cases. The mean age of CVD cases was 60.5 years (60.2 years for men and 61.1 years for women). Because the minimum follow-up period for all participants was 5 years, we evaluated the accuracy of the different models in predicting the 5-year risk of CVD. At a 5-year horizon, the total number of CVD cases was 4,801.
Prediction accuracy
Comparison of prediction models. The prediction accuracy of the different models under consideration evaluated at a 5-year horizon is shown in Table 2. We used the Framingham score as a baseline model for performance evaluation (AUC-ROC: 0.724, 95% CI: 0.720-0.728). Both the Cox PH model with the 7 conventional risk factors (AUC-ROC: 0.734, 95% CI: 0.729-0.739), and the Cox PH model with all variables (AUC-ROC: 0.758, 95% CI: 0.753-0.763) achieved an improvement in the AUC-ROC compared to the baseline model (p < 0.001). The improvement achieved by the Cox PH model that uses the same predictors used by the Framingham score is due in part to the fact that the Cox PH model is directly derived from the analysis cohort, whereas the Framingham score coefficients were derived from a different population.
With the exception of support vector machines, all the standard ML models achieved statistically significant improvements compared to the baseline Framingham score. Furthermore, when compared to the Cox PH model that uses all variables, neural networks, AdaBoost, gradient boosting, and AutoPrognosis all achieved a significantly higher AUC-ROC. AutoPrognosis achieved a higher AUC-ROC compared to all other standard ML models (AUC-ROC: 0.774, 95% CI: 0.768-0.780, p < 0.001), which suggests that the automated ML system managed to automatically select and tune the "right" ML model. (The AutoPrognosis model trained on all variables was also well-calibrated, with an in-sample Brier score of 0.0121.) Compared to the most competitive benchmark (the Cox PH model that uses all of the variables), the net reclassification improvement (NRI) was +12.5% in favor of AutoPrognosis. AutoPrognosis trained only with the 7 conventional risk factors still outperformed the baseline Framingham score (p < 0.001).

Most of the variables in the UK Biobank are non-laboratory variables collected through an automated touchscreen questionnaire about lifestyle, clinical history and nutritional habits. We evaluated the accuracy of AutoPrognosis once when it is trained with 369 variables corresponding to the participants' self-reported information (questionnaires) only, and once when it is trained with 104 variables obtained from blood assays, diagnostic tests, and physiological measurements. As we can see in Table 2, AutoPrognosis with only questionnaire-related variables still achieves a significant improvement over the baseline Framingham score (AUC-ROC: 0.752, 95% CI: 0.747-0.757, p < 0.001), and is superior to the model that only uses laboratory-based variables.
Classification analysis. In order to better assess the clinical significance of our results, we compared the AutoPrognosis model with the traditional Framingham score in predicting 7.5% CVD risk (the threshold for initiating lipid-lowering therapies recommended by the NICE guidelines [10]). At this operating point, the Framingham baseline model predicted 2,989 CVD cases correctly from 4,801 total cases, resulting in a sensitivity of 62.2% and a PPV of 1.5%. Our AutoPrognosis model correctly predicted 3,357 out of the 4,801 CVD cases, resulting in a sensitivity of 69.9% and a PPV of 2.6%. This corresponds to a net increase of 368 in the number of CVD patients who would benefit from receiving a preventive treatment in a timely manner when utilizing the predictions of our model.
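The sensitivity figures quoted above follow directly from the reported counts, as the short check below shows; note that the PPVs additionally require the total number of patients flagged above the threshold, which the text does not quote.

```python
# Arithmetic check of the classification figures quoted above (counts from the text).
cases = 4801
tp_framingham, tp_autoprognosis = 2989, 3357

print(f"Framingham sensitivity:    {tp_framingham / cases:.4f}")     # 0.6226 -> reported 62.2%
print(f"AutoPrognosis sensitivity: {tp_autoprognosis / cases:.4f}")  # 0.6992 -> reported 69.9%
print(f"Additional cases flagged:  {tp_autoprognosis - tp_framingham}")  # 368
# The reported PPVs (1.5% and 2.6%) would additionally require the total
# number of participants predicted above the 7.5% threshold by each model.
```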
Variable importance. Table 3 lists the 20 most important variables ranked according to their contribution to the predictions of the AutoPrognosis model (along with their importance scores). Variables related to physical activity (usual walking pace) and information on blood measurements appeared to be more important for the predictions of AutoPrognosis than traditional risk factors included in most existing scoring systems. For women, a remarkable predictor of CVD risk was the measured "ankle spacing width". This may be linked to symptoms of poor circulation, such as swollen legs, which is predictive of future CVD events [37]. We also found that usage of hormone-replacement therapy (HRT) was on the list of top predictors of CVD risk for women. For men, blood measurements such as haematocrit percentage and haemoglobin concentration, and variables such as urinary sodium concentration, were among the most important risk factors.
Prediction accuracy in individuals with history of diabetes. Among the 423,604 participants included in our cohort, a total of 17,908 participants (4.22%) had a known history of diabetes (either Type 1 or Type 2) at baseline. In Table 4, we show the AUC-ROC performance of AutoPrognosis and the baseline Framingham score when validated separately on the diabetic and non-diabetic populations. As we can see, the baseline Framingham score was less accurate in the diabetic population (AUC-ROC: 0.578, 95% CI: 0.560-0.596) compared to its achieved accuracy for the overall population (AUC-ROC: 0.724, 95% CI: 0.720-0.728, p < 0.001). In contrast, AutoPrognosis maintained high predictive accuracy for the diabetic population (AUC-ROC: 0.713, 95% CI: 0.703-0.723).
The variable ranking for the diabetic sub-population is provided in Table 5. We note that the list of important variables in the diabetic subgroup is substantially different from that of the overall population. One major difference is that for diabetic patients, microalbuminuria appeared to be strongly linked to an elevated CVD risk. In the overall population (423,604 participants), the average measure of microalbumin in urine was 27.8 mg/L for participants with no CVD events, and 52.2 mg/L for participants with CVD events. In the diabetic population (17,908 participants), participants with no CVD events had an average microalbumin in urine of 61.0 mg/L, whereas for those with a CVD event, the average microalbumin in urine was 128.76 mg/L. (Information on microalbumin in urine was available for 30% of the patients in the overall population, and 50% of patients in the diabetic population.)

Predictive ability of individual variables in UK Biobank. In order to evaluate the individual predictive ability of the UK Biobank variables, we exhaustively fitted simple versions of our AutoPrognosis model for each of the 473 variables. For each such model, we use one distinct variable as an input and evaluate the resulting AUC-ROC. Because most variables are correlated with age and gender, we use the age variable as a second predictor for all models, and fit separate models for men and women. The AUC-ROC values of the resulting models are depicted in the scatter-plot in Fig 2. As shown in Fig 2, variables related to smoking habits or exposure to tobacco smoke displayed the highest predictive ability. Self-reported health rating was predictive for both genders, but more predictive for women. Existence of long-standing illness was strongly predictive of CVD events for women, and less predictive for men. Variables extracted from the electrocardiogram (ECG) records possessed stronger predictive ability for men.
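The per-variable screening described here reduces to a simple loop; in the sketch below, `fit_auc` is a placeholder for any cross-validated AUC routine (such as `cv_auc` above), and the per-gender split is omitted for brevity.

```python
import numpy as np

def screen_variables(X, y, age, variables, fit_auc):
    """Compare age+variable models against an age-only baseline AUC-ROC."""
    baseline = fit_auc(np.column_stack([age]), y)            # age-only model
    gains = {}
    for var in variables:
        auc = fit_auc(np.column_stack([age, X[var]]), y)     # age + one variable
        if auc > baseline:                                    # keep improvements only
            gains[var] = auc
    return baseline, gains
```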
Discussion
In this large prospective cohort study, we developed an ML model based on the AutoPrognosis framework for predicting CVD events in asymptomatic individuals. The model was built using data for more than 400,000 UK Biobank participants, with over 450 variables for each participant. Our study conveys several key messages. First, AutoPrognosis significantly improved the accuracy of CVD risk prediction compared to well-established scoring systems based on conventional risk factors and currently recommended by primary prevention guidelines (the Framingham score). Second, AutoPrognosis was able to agnostically discover new predictors of CVD risk. Among the discovered predictors were non-laboratory variables that can be collected relatively easily via questionnaires, such as the individuals' self-reported health ratings and usual walking pace. Third, AutoPrognosis uncovered complex interaction effects between different characteristics of an individual, which led to the recognition of risk predictors that are specific to certain sub-populations for whom existing guidelines were providing unreliable predictions.
When can ML help in prognostic modeling?
The abundance of a large number of informative variables in the UK Biobank (473 variables) guarantees an "information gain" that can be achieved by any data-driven model, including the standard Cox PH model, compared to the existing prediction algorithms that use only a limited number of conventional risk factors (e.g., the Framingham score). The results in Table 2 show that, in addition to the information gain, AutoPrognosis also attained a "modeling gain" that allowed it to outperform the standard Cox PH model that uses all of the 473 variables. In general, the modeling gain achieved by AutoPrognosis would result from its ability to select among different models with various levels of complexity and numerical robustness in a completely data-driven fashion, without committing to any presupposition about the superiority of any given model. In our experiments, the Cox PH model supplied with all of the 473 variables (without variable selection) provided a noticeably poor performance (i.e., an average AUC-ROC of 0.6). This is because the numerical solvers of the Cox PH model collapse when the data dimensionality is very large; this is why a variable selection pre-processing step was essential for fitting the Cox PH model. This implies that, even if the true underlying data model is perfectly linear, fitting standard linear models such as Cox PH or linear regression may not be sufficient for harnessing the information gain, since such models are not numerically robust in high-dimensional settings. AutoPrognosis solves this problem by selecting more robust models that better fit the high-dimensional data; in our experiments, these were tree-based models such as XGBoost and random forests. This observation shows that information gain and modeling gain are inherently entangled: to harness the information gain, we need to consider a more complex modeling space.
While the information gain appeared to be more significant than the modeling gain in our experiments, we note that even when provided with the same 7 core risk factors used by the Framingham score, AutoPrognosis was still able to offer a statistically significant AUC-ROC gain compared to the Framingham score and a Cox PH model that uses the same 7 variables. This shows that the modeling gain is not necessarily limited to settings where many predictors are available and numerical robustness is a concern, but is rather achievable whenever a small number of predictors display complex interactions.
Because not every ML model would necessarily improve over the Framingham score or the simple Cox PH model, our usage of the AutoPrognosis algorithm was essential for realizing the full benefits of ML modeling. As the results in Table 2 demonstrate, some ML models did not improve over the baseline Framingham score, whereas others provided modest improvements. This is because selection of the right ML model and careful tuning of the model's hyper-parameters are two crucial steps for realizing the potential benefits of ML. AutoPrognosis automates those steps, which makes ML application easily accessible for mainstream clinical research. The importance of model selection and hyper-parameter optimization has been overlooked in previous clinical studies that applied ML in prognostic modeling [16][17][18]. Our study is unique in that, to the best of our knowledge, it is the first to carry out a comprehensive investigation of the performance of ML models in a large cohort with such an extensive number of predictors.
Risk prediction with non-laboratory variables
Individuals in developed countries tend to seek out health information through online resources and web-based risk calculators [38]. In developing countries, where 80% of all world-wide CVD deaths occur [39], there are limited resources for risk assessment strategies that require laboratory testing [39,40]. The results in Table 2 show that AutoPrognosis could potentially provide reliable risk predictions by using information from non-laboratory variables about the participants' lifestyle and medical history. The most predictive non-laboratory variables included in our model were age, gender, smoking status, usual walking pace, self-reported overall health rating, previous diagnoses of high blood pressure, income, Townsend index and parents' ages at death. Inclusion of such variables in web-based risk calculators can help provide reasonably accurate risk predictions when obtaining laboratory variables is not viable.
One remarkable finding in Table 2 (and Fig 2) is that apart from the well-established age and gender risk factors, two other non-laboratory variables were found to be very predictive of the CVD outcomes; those are the "self-reported health rating" and the "usual walking pace". (Both variables were also found to be predictive of the overall mortality risk in a recent study on the UK Biobank [22].) Neither of the two variables is included in any of the existing risk prediction tools. Walking pace was equally predictive for men and women, but the self-reported health rating was more predictive for women and less so for men. This may be explained by either gender-specific reporting bias or true clinical differences. Therefore, prediction tools that would include subjective non-laboratory variables, such as the self-reported health rating, should be carefully designed in such a way that self-reporting bias is reduced.
Risk predictors specific to diabetic patients
Unlike the Framingham score, AutoPrognosis was able to maintain high predictive accuracy for participants diagnosed with diabetes at baseline (Table 4). This suggests that the AutoPrognosis model has learned diabetes-specific risk factors that were not previously captured by the existing prediction algorithms. By investigating the risk factor ranking within the diabetic subgroup (Table 5), we found that urinary microalbumin (measured in mg/L) is a very strong marker for increased CVD risk among individuals with diabetes. The dismissal of urinary microalbumin in existing risk scoring systems may explain their poor prognostic performance when validated in cohorts of diabetic patients [12,13]. Our results indicate that predictions based on AutoPrognosis can provide better guidance for CVD preventive care in diabetic patients.
It is worth mentioning that the microalbumin in urine measures were available for only 125,406 participants in the overall cohort (29.6%). In a standard prognostic study, such a variable may get omitted from the analysis because of its high missingness rate. AutoPrognosis automatically recognized that this variable is relevant for diabetic patients, and hence did not omit it in its feature processing stage.
Limitations
The main limitation of our study is the absence of the cholesterol biomarkers (total cholesterol, HDL cholesterol and LDL cholesterol) from the latest release of the UK Biobank data repository, which hindered direct comparisons with the QRISK2 scores currently recommended by the NICE guidelines. Furthermore, other blood-based biomarkers have been reported to be associated with CVD risk, but were also not yet released in the UK Biobank data repository, such as triglycerides [41], measures of glycemia [42], markers of inflammation [43], and natriuretic peptides [44]. Inclusion of such predictors could improve the predictive accuracy of all models tested in this study, and could also alter the risk predictors' ranking in Table 3, but is unlikely to change our conclusions on the usefulness of ML modeling in CVD risk prediction.
Another limitation of our study is that the UK Biobank cohort is ethnically homogeneous: 94% of the participants were of white ethnicity. Hence, assessment of the importance of ethnicity as a predictor of CVD events and the recognition of ethnicity-specific risk predictors were not possible in our study.
Fig 1. An illustrative schematic for AutoPrognosis. In this depiction, AutoPrognosis constructs an ensemble of three ML pipelines. Pipeline 1 uses the MissForest algorithm to impute missing data, and then compresses the data into a lower-dimensional space using the principal component analysis (PCA) algorithm, before using the random forest algorithm to issue predictions. Pipelines 2 and 3 use different algorithms for imputation, feature processing, classification and calibration. AutoPrognosis uses the algorithm in [19] to make decisions on what pipelines to select and how to tune the pipelines' parameters. https://doi.org/10.1371/journal.pone.0213653.g001
Fig 2. Predictive ability of the UK Biobank variables for men and women. Each point represents a variable in the UK Biobank ordered by the ability to predict CVD events for men and women. Predictions based solely on age achieved an AUC-ROC of 0.632 ± 0.003 for men and 0.665 ± 0.002 for women. We report the AUC-ROC from models trained with individual variables in addition to age, and only display variables that achieved a statistically significant improvement in AUC-ROC compared to predictions based on age only. Each color represents a different variable category. Variables deviating from the (dotted gray) regression line have an AUC-ROC that differs between men and women more than expected in view of the overall association between the two genders, suggesting a stronger relative importance in one gender group. https://doi.org/10.1371/journal.pone.0213653.g002
Table 2. Performance of all prediction models under consideration. Columns: Model, AUC-ROC, Absolute AUC-ROC Change.
Table 3. Variable ranking by contribution to the predictions of AutoPrognosis.
* Risk factors utilized by existing risk prediction algorithms. Explanations for the different variables in this table are provided in S2 Appendix. https://doi.org/10.1371/journal.pone.0213653.t003 | 2019-05-17T13:08:36.968Z | 2019-05-15T00:00:00.000 | {
"year": 2019,
"sha1": "4d7c166133f493fd4299f4e43cb0fbfd03948248",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0213653&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4d7c166133f493fd4299f4e43cb0fbfd03948248",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261185985 | pes2o/s2orc | v3-fos-license | Chemotrophy-based phosphatic microstromatolites from the Mississippian at Drewer, Rhenish Massif, Germany
Abstract The Drewer quarry located in the Rhenish Massif is a well-studied outcrop that comprises Upper Devonian (Famennian) to Lower Carboniferous (Viséan) strata. Within the Drewer deposits two black shale intervals have been described that are linked to two global oceanic anoxic events, the Hangenberg Event and the Lower Alum Shale Event. The black shales associated with the Middle Tournaisian Lower Alum Shale Event contain abundant phosphatic concretions, which were investigated using thin section petrography, powder X-ray diffraction, Fourier-transform infrared spectrometry and scanning electron microscopy. The concretions formed during several growth phases under anoxic and at least episodically sulphidic conditions within the sediment and served as a substrate for subsurface microbial mats that formed phosphatic microstromatolites. The microstromatolites occur either as partially branched columns of up to 600 µm in length attached to the phosphatic concretions or as smaller, bulbous aggregates surrounding the concretions. Element mapping identified the presence of pyrite and other metal sulphides within the phosphatic microstromatolites. The carbon and oxygen stable isotopic composition of phosphate-associated carbonate within the phosphatic microstromatolites suggests that the mat-forming microorganisms were probably anaerobic, chemotrophic microbial communities dwelling in the anoxic environment during the Lower Alum Shale Event. Such interpretation agrees with the deeper-water depositional setting of the Lower Alum Black Shale and its high content of organic matter, suggesting that chemotrophic microbial mats are potent agents of phosphogenesis in general, and of the formation of phosphatic stromatolites in particular.
Introduction
Black shales are organic-rich sediments that commonly formed coevally in space and time through Earth history and have been associated with perturbations in the carbon cycle, climate change and global extinction events (Schlanger & Jenkyns, 1976; Arthur & Sagemann, 1994; Sagemann et al. 2003; Jenkyns, 2010). They are characterized by rapid and efficient accumulation of organic matter in marine sediments, either by increased primary production or efficient preservation. Black shales commonly contain phosphorus-rich rocks referred to as phosphorites, which are defined by P2O5 contents of 18 wt.% or higher (Föllmi, 1996). Phosphorites and black shales require similar formation conditions and are therefore commonly encountered together in the rock record. Phosphogenesis, which describes the precipitation of authigenic phosphorus minerals in marine sediments, occurs within a specific environmental spectrum where ocean circulation, sedimentation and the preservation of organic matter during early diagenesis allow for phosphorus to accumulate sufficiently in sedimentary pore waters (Glenn et al. 1994; Föllmi, 1996; Benitez-Nelson, 2000; März et al. 2008; Küster-Heins et al. 2010). In the modern ocean, such conditions exist in suboxic to anoxic marine sediments typified by a constant supply of organic matter, as observed in coastal upwelling zones, continental margin sediments or restricted marine basins (Filipelli & Delaney, 1996; Schenau et al. 2000; Filipelli, 2011; Lomnitz et al. 2016). Because phosphorus is highly mobile and is cycled efficiently during early diagenesis, persistent anoxic conditions are required to allow for phosphorus minerals to precipitate (Ruttenberg & Berner, 1993; Ingall & Jahnke, 1994; Föllmi, 1996; Kraal et al. 2012). Phosphogenesis typically requires multiple phosphorus sources including the anaerobic microbial degradation of organic matter, the reductive dissolution of iron oxides releasing adsorbed phosphate, as well as the degradation and dissolution of bone material and fish debris (Jensen et al. 1995; Schenau et al. 2000; Smith et al. 2015). Since phosphorus minerals such as carbonate fluorapatite are common in organic-rich black shales, these deposits are regarded as type sections for phosphogenesis that hold important clues on the driving processes enabling the formation of sedimentary phosphate minerals (Filippelli, 2011).
Phosphatic stromatolites have been found in Proterozoic and Phanerozoic carbonate, phosphorite and black shale lithologies (Krajewski et al. 2000; Caird et al. 2017; Sallstedt et al. 2018; Zoss et al. 2019). Stromatolites are lithified microbial build-ups that are the result of the interaction between various microbial metabolic processes and their sedimentary environment (Dupraz & Visscher, 2005; Allwood et al. 2007; Sallstedt et al. 2018). Stromatolites in the fossil record have commonly been interpreted as products of cyanobacteria due to their remarkable similarity to modern cyanobacterial mats (Stal, 2012). A cyanobacterial origin has also been suggested for phosphatic stromatolites occurring in shallow marine settings, due to the presence of preserved oxygen gas bubbles (Bosak et al. 2009; Sallstedt et al. 2018), laminated fabrics related to trapping and binding mechanisms, stable isotope analyses, mineralogy, as well as facies analyses of the host sediments (Rao et al. 2000, 2002; Lundberg & McFarlane, 2011; Drummond et al. 2015; Caird et al. 2017; Sallstedt et al. 2018). Modern examples of such cyanobacterial phosphatic stromatolites are scarce and have only recently been described from a low-phosphorus terrestrial environment (Büttner et al. 2021). In contrast to these photosynthesis-based cyanobacterial stromatolites, phosphogenesis can result from organic matter degradation by anaerobic, sulphate-reducing bacteria (Thamdrup & Canfield, 1996; Benitez-Nelson, 2000; Arning et al. 2009a; Berndmeyer et al. 2012). A further relationship between bacteria involved in the sulphur cycle and phosphogenesis has been suggested for various ancient and modern phosphorite and phosphorus-rich deposits (Williams & Reimers, 1983; Schulz & Schulz, 2005; Bailey et al. 2007, 2013; Arning et al. 2009a; Cosmidis et al. 2013; Zwicker et al. 2021), in particular with regard to the large, colourless chemotrophic sulphide-oxidizing bacteria. These bacteria take up and release phosphate into the pore waters during early diagenesis and are capable of storing polyphosphate within their cells (Schulz & Schulz, 2005; Sievert et al. 2007; Zopfi et al. 2008; Goldhammer et al. 2010).
Here we report chemotrophy-based phosphatic microstromatolites enclosed in black shales deposited after the Devonian-Carboniferous transition in the course of the globally coeval, transgressive Lower Alum Shale Event (Sobolev et al. 2000; Siegmund et al. 2002; Kaiser et al. 2011; Becker et al. 2016, 2021). These black shales are rich in phosphatic concretions that host phosphatic microstromatolites of various sizes and morphologies. Using a comprehensive approach combining petrography, mineralogy and isotope geochemistry, it is suggested that the stromatolite-forming microbial communities thrived in deep-water environments and were independent of sunlight, relying instead on chemotrophic metabolic pathways.
2.a. Tectonic setting
The Rhenohercynian Basin is part of the Avalonian Plate and was related to numerous active subduction zones associated with the closing Rheic Ocean (Oncken et al. 2000). The Rhenohercynian fold and thrust belt, which comprises the Rhenish Massif, is part of the Middle European Variscides that occur as well exposed and complete Middle Devonian to Lower Carboniferous successions (Oncken et al. 1999). The Rhenohercynian Zone has been characterized as an evolving rift system developing on Upper Devonian to Lower Carboniferous subsiding shelf sediments of the Old Red Continent (von Raumer et al. 2017). This zone, along with the Saxothuringian and the Moldanubian zones, divides the Central European Variscides from the northwest to the southeast (Kossmat, 1927; Brinkmann, 1948). The Rhenish Massif has been associated with the Avalonian Terrain that separated from Gondwana in the Early Ordovician (Oncken et al. 2000; Eckelmann et al. 2014). The resulting Rheic Ocean began to close from the Early Devonian up to the Carboniferous, a process that involved the formation of several microplates separating the closing ocean from the Paleotethys and the Rhenohercynian Ocean by island arcs, which accreted to Avalonia in the Early Devonian (Oncken & Weber, 1995). The opening of the Rhenohercynian Ocean has been related to an active Laurussian continental margin that separated the Avalonian terranes (von Raumer & Stampfli, 2008; Zeh & Gerdes, 2010). These island arcs are documented by rocks of the Mid-German Rise, or 'Mid-German Crystalline Zone' (Altenberger et al. 1990; Dombrowski et al. 1995), which separates the Rhenohercynian from the Saxothuringian zones (Zeh & Gerdes, 2010).
The Rhenohercynian Basin developed as an elongated, narrow trough between the Mid-German Rise to the south and the London-Brabant High in the north as part of the Old Red Continent during the Middle Devonian (Königshof et al. 2016). The Upper Devonian sedimentary rocks of the Rhenohercynian Basin are part of the Hercynian Facies that comprise siltstones and red shales, reef carbonates and bioherms and drowned carbonate platforms (Becker et al. 2016). The Late Devonian reefs were subject to several global extinction events such as the Kellwasser crisis in the latest Frasnian and the Hangenberg Event at the Devonian-Carboniferous boundary (Becker et al. 2016). Deposits of the lowermost Carboniferous in the Rhenish Massif comprise cherts, organic-rich black shales and turbiditic limestones, which have been collectively associated with the 'Kulm-Facies'. In the Early Carboniferous, the Rhenohercynian Basin was successively closed and subducted under the Mid-German High, forming an active continental margin as evidenced by extensive local volcanism (Oncken & Weber, 1995; Siegmund et al. 2002).
2.b. Upper Devonian to Mississippian strata at Drewer
The Drewer quarry represents the northernmost locality of the Rhenish Massif, cropping out within several anticlinal and synclinal structures (Becker et al. 2016), which are part of a larger thrust and fold belt with numerous synclinal and anticlinal systems from the Velbert and Remscheid-Altena anticlines to the west and the larger Warstein anticline to the south (Fig. 1). Drewer is located within the Belecke anticline, which formed an intrabasinal swell in a starved basin during the Early Carboniferous (Clausen et al. 1989; Korn, 2010). It represents a key lithostratigraphic and biostratigraphic locality for the Devonian-Carboniferous boundary, displaying Milankovitch cyclicity, several regressions and transgressions, as well as extinction events as recorded by the Hangenberg and Lower Alum Shale Events (Becker, 1992, 1999; Korn et al. 1994; Siegmund et al. 2002; Kumpan et al. 2015; Becker et al. 2016). These two black shale intervals have been correlated globally to coeval strata in Spain, France and Poland, as well as to Hangenberg Event strata from India (Ganai & Rashid, 2019), Vietnam (Komatsu et al. 2014), China (Zhang et al. 2019), Morocco (Kaiser et al. 2007), Italy (Spalletta et al. 2021) and the USA (Lu et al. 2019; Martinez et al. 2019; Barnes et al. 2020). Facies changes are recorded at Drewer starting with the latest Famennian nodular Wocklum Limestone and the calcareous, laminated Drewer Sandstone (Luppold et al. 1994), followed by the Hangenberg Black Shale and intercalated Hangenberg Sandstone (Clausen et al. 1989). The Devonian-Carboniferous boundary at Drewer is represented by the Stockum Limestone, composed of alternating limestones and marls, followed by the Lower Tournaisian Hangenberg Limestone, characterized as typical cephalopod limestone (Becker et al. 2016). A sharp transition to phosphorus-rich black shales associated with the global Lower Alum Shale Event (cf. Becker, 1992) represents a carbonate production and ecosystem collapse during a maximum flooding event (Korn, 2010; Becker et al. 2016). The abrupt transition from cephalopod-rich limestones to black shales marks the lower boundary of the German Kulm facies, postdating the Hangenberg Event and the Devonian-Carboniferous boundary.
The uppermost strata at Drewer feature grey, crinoidal limestones, which have been correlated to the Erdbach Limestone of the southern Rhenish Massif (Becker et al. 2016).
Material and methods
The northwestern face of the Drewer quarry was logged over an 8-meter section, producing a detailed profile of the Lower Alum Black Shale facies. Thin sections of phosphorus-rich black shales were prepared. For bulk rock powder X-ray diffraction (XRD) analysis, samples were crushed to fine powder in an agate mortar. XRD analysis of carbonate-cemented background sediment was performed on a powder sample obtained with a handheld microdrill from a polished slab. XRD measurements were carried out at the Crystallography Department (University of Bremen), using a Philips X'Pert Pro MD X-ray diffractometer with a Cu-Kα tube (λ = 1.541 Å; 45 kV, 40 mA). Scanning electron microscopy (SEM) was conducted on a conventional tungsten filament SEM (FEI Inspect S) and a field-emission-gun scanning electron microscope with integrated focused ion beam (FEI Quanta 3D FEG) and an energy-dispersive X-ray detection unit (EDAX Apollo XV) at the Institute for Geology of the University of Vienna. Data processing was conducted using the EDAX TEAM V3.1.1 software. The presence of pyrite was determined using a Cameca SX-100 electron microprobe at the Faculty of Geosciences, University of Bremen. Analytical conditions included an acceleration voltage of 20 kV, a beam current of 10 nA and a defocussed beam of 1-2 μm diameter. Counting times were 20 seconds on peak and 10 seconds on background. For quantification, natural minerals from the collections of the University of Vienna and the Smithsonian Institution (Jarosewich et al. 1980) were used, together with the built-in PAP matrix correction.
Fourier-transform infrared (FTIR) spectra were acquired from 370 to 4,000 cm −1 on a Bruker Tensor 27 FTIR spectrometer equipped with a glo(w)bar MIR light source, a KBr beam splitter and a DLaTGS detector at the Department of Mineralogy and Crystallography of the University of Vienna.A polished thin section was pressed on the 2 × 3 mm diamond window of a Harrick MVP 2 diamond attenuated total reflectance (ATR) accessory in such a way that either the apatite or the calcite components were predominantly probed.For comparison, spectra of blackboard chalk and a pure fluorapatite crystal from Durango, Mexico, were acquired.Sample and background spectra were averaged from 32 scans at 4 cm −1 spectral resolution.Background spectra were obtained from the empty ATR unit.Data handling was performed with OPUS 5.5 software (Bruker Optik GmbH, 2005).
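As an aside, the stacked-spectra presentation used later in Fig. 10 (vertical normalization plus offsets) is straightforward to reproduce. The sketch below is purely illustrative: it uses synthetic Gaussian bands at the approximate positions discussed in Section 4.c, not the measured OPUS data, and all band widths and heights are invented stand-ins.

```python
# Illustrative only: synthetic ATR-style spectra with min-max normalization
# and vertical offsets, mimicking the presentation style of Fig. 10.
import numpy as np
import matplotlib.pyplot as plt

wn = np.linspace(370, 1600, 1231)                   # wavenumber axis (cm-1)

def band(center, width, height):
    """Simple Gaussian stand-in for an absorption band."""
    return height * np.exp(-((wn - center) / width) ** 2)

# Band positions follow the text: phosphate nu1/nu3 near ~1030 cm-1 and
# nu4 near ~565 cm-1; the B-type carbonate nu3 doublet sits at 1425/1450 cm-1.
spectra = {
    "fluorapatite (carbonate-free)": band(1030, 60, 1.0) + band(565, 25, 0.4),
    "carbonate fluorapatite": band(1030, 60, 1.0) + band(565, 25, 0.4)
                              + band(1425, 15, 0.12) + band(1450, 15, 0.10),
}

fig, ax = plt.subplots()
for i, (name, y) in enumerate(spectra.items()):
    y_norm = (y - y.min()) / (y.max() - y.min())    # min-max normalization
    ax.plot(wn, y_norm + 1.2 * i, label=name)       # vertical offset for clarity
ax.invert_xaxis()                                   # FTIR plotting convention
ax.set_xlabel("wavenumber (cm$^{-1}$)")
ax.set_ylabel("normalized ATR signal (offset)")
ax.legend()
plt.show()
```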
Sample powders for carbon and oxygen stable isotope analysis of phosphate-associated carbonate (PAC) of carbonate fluorapatite were obtained from polished rock slabs using a handheld microdrill. A total of 14 samples were prepared: four from phosphatic microstromatolites, five from the phosphatic concretions and five from the host rock surrounding the concretions. Stable isotope analyses were conducted at IOW using a Thermo Scientific Gasbench II connected to a Thermo Finnigan MAT 253 gas mass spectrometer via a Thermo Scientific Conflo IV split interface, following Böttcher et al. (2018). It was assumed that the phosphoric acid reaction with apatite and calcite to release carbon dioxide is associated with the same kinetic 18O/16O fractionation effect (Kolodny & Kaplan, 1970; Loeffler et al. 2019). Isotope values given in '‰' are equivalent to 'mUr' (milli-Urey; Brand & Coplen, 2012). The carbon and oxygen isotope data are given with respect to the V-PDB standard with a reproducibility of better than ±0.10 mUr and ±0.15 mUr, respectively. Due to phase-specific sampling, the determined δ13C values from the phosphatic microstromatolites represent almost pure apatite-bound inorganic carbon.
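For orientation, the delta values reported throughout follow the standard definition relative to V-PDB (this is standard notation, not an addition from the study's data):

$$\delta^{13}\mathrm{C} \;=\; \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{V\text{-}PDB}}} - 1\right) \times 1000\ \text{‰},$$

with δ18O defined analogously from the 18O/16O ratios; 1‰ corresponds to 1 mUr in the Brand & Coplen (2012) nomenclature cited above.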
4.a. Lithostratigraphy of the Devonian-Carboniferous boundary at Drewer
The Devonian-Carboniferous boundary section at Drewer exposes the Wocklum Limestone, Hangenberg Black Shale and Sandstone, Hangenberg Limestone, Lower Alum Black Shale, equivalents of the Erdbach Limestone and the siliceous Kulm siltstones (Fig. 2; cf. Korn et al. 1994; Becker et al. 2016). The Wocklum Limestone is grey to dark grey, nodular, partly sparitic and well-bedded in the upper part, with a sharp transition to the Hangenberg Black Shale, which is composed of darker grey to black, slaty to flaky bands of shale that are partly unconsolidated. The overlying Hangenberg Sandstone is fine grained and well sorted, with a gradational contact to the Hangenberg Limestone above. The transitional section of the lower part of the Hangenberg Limestone comprises fine-grained siltstone with several thin nodular limestone beds, whereas the main limestone is grey and massive. The shift to the Lower Alum Black Shale is marked by thin siltstone and limestone beds, which are 10-15 cm thick (Fig. 2). The Lower Alum Black Shale measures approximately 130 cm in thickness and is described in detail below. The grey to dark grey, massive, mostly sparitic Erdbach Limestone above the Lower Alum Black Shale is approximately 80 cm thick, followed by the black, very fine-grained, siliceous Kulm siltstone.
4.b. Lithostratigraphy of the Lower Alum Black Shale interval at Drewer
The detailed profile of the Lower Alum Black Shale (Fig. 3) begins with the uppermost parts of the Hangenberg Limestone, a micritic, bioclastic wackestone (cf. Becker et al. 2016) that grades into weakly carbonaceous siltstones representing the last phase of deposition before the Lower Alum Shale Event. The Lower Alum Black Shale interval at Drewer has been divided into nine units, which are designated as units 5a to 5i (Fig. 3). Unit 5a comprises 40 cm of dark grey clay with pyrite-bearing intervals and 0.5-10 mm thick beds. Unit 5b includes 30 cm of a similar lithology as unit 5a with clay, as well as first occurrences of isolated, tabular phosphatic concretions up to 2 cm in width and 0.5 cm in thickness. Unit 5c is similar in appearance with more individual tabular concretions, which occur more frequently in unit 5d. In this unit, various concretion morphologies from tabular to platy to nodular are present (Fig. 3). Unit 5e shows wavy lamination within the finely bedded shales, as well as cracks and fissures filled with weathered material. Unit 5f exhibits a clear transition from unit 5e, marked by numerous horizontal, tabular and partly overlapping phosphorite beds (Figs. 3, 4a). This bed is characterized by its greyish colour and more abundant concretions (Fig. 3). Units 5g and 5h are replete with nodular to oval concretions measuring up to 10 cm in width that occur in close proximity to each other (Fig. 3). Unit 5i is the uppermost Lower Alum Black Shale unit at Drewer and features wavy lamination and tabular, elongated and occasionally irregular and fragmented concretions orientated with their long axes parallel to bedding (Figs. 3, 4). Some smaller concretions are spherical. This unit is capped by a sharp transition to the pyrite-bearing, dark grey Erdbach Limestone (Fig. 3).
4.c. Petrography, mineralogy and stable isotope geochemistry
Lower Alum Black Shale unit 5h was chosen for sampling and detailed investigation due to its high content of phosphatic concretions (Fig. 5). Two microfacies were identified in these samples, referred to as microfacies 1 and 2 from hereon (cf. Fig. 5a). Microfacies 1 is composed of silty shales and mudrocks featuring a pelitic matrix with poorly sorted components that comprise around 10% of the total volume. The components are mostly angular, detrital quartz grains with subordinate occurrences of muscovite and minor pyrite. X-ray diffraction patterns further revealed the presence of minor dolomite and ferroan dolomite. Microfacies 2 contains poorly sorted, angular quartz grains with subordinate dolomite and muscovite, as well as phosphatic minerals and large phosphatic concretions (Fig. 5). Microfacies 2 contains radiolarians and subordinate conodonts, and the portion of rock fragments and quartz components is estimated at 20-30 vol.%. The phosphatic concretions occur as spherical, oval and elongated forms that are between 1 and 4 cm in size. The phosphatic concretions can be divided into two different types. The first type of these concretions is oval to spherical and exhibits well-defined contours at their margins. These concretions contain a matrix of cryptocrystalline carbonate fluorapatite, as well as poorly preserved radiolarians that occupy an estimated 5% of the concretion volume. The second type of concretions grew around the first type, exhibiting less well-defined contours at their margins (Fig. 5), and may reach up to 8 cm in diameter. The texture, fabric and mineral content of these concretions are similar to those of microfacies 2, but they contain more carbonate fluorapatite compared to quartz.
Phosphatic microstromatolites occur in two morphologies. The first type comprises aggregates of thin columns showing well-defined lamination (Fig. 6a, b), which measure up to 250 μm in diameter and reach 600 μm in length. Some columns are shorter and show a cauliflower-like morphology (Fig. 6b). Some of the microstromatolites of this type occur within the outer parts of phosphatic concretions (cf. Fig. 5d), where much of the space between individual columns is occupied by microcrystalline silica (Fig. 6a, b). The second type of microstromatolite corresponds to smaller aggregates within the outer concretions (Fig. 6c, d). This type of microstromatolite does not exhibit any columnar or branching morphologies, yet it shows well-defined lamination within bulbous aggregates. Both types of microstromatolites contain unevenly distributed, authigenic pyrite (Figs. 6, 7). Most common are columnar and branching microstromatolites attached to the outer rim of the inner concretion (cf. Fig. 7a, c, d), with some concretions completely encased by microstromatolites (Fig. 7a-d). The individual stromatolite columns show a fine lamination (Fig. 7e, f).
Scanning electron microscopy confirms that the microstromatolites within the phosphatic concretions are composed predominantly of apatite, revealing a light grey appearance distinct from purely siliciclastic components in backscatter imaging (Fig. 8). The alternating fine laminae within the microstromatolites contain different amounts of alumosilicates and microcrystalline silica. The columnar stromatolites attached to the inner concretion show varying degrees of siliciclastics (Fig. 8a, b) and engulf radiolarian tests that consist of silica (Figs. 8c, 9c). Discernible laminae are composed mainly of alumosilicates, whereas the bulk of the stromatolites is composed of phosphorus minerals (Fig. 9b-d). Pyrite aggregates are prominent in backscatter mode by their strong white reflection (Fig. 8c, d). Electron microprobe analyses revealed that authigenic pyrite occurs within the second-generation concretions and within phosphatic microstromatolites (Fig. 8c, d), yet no pyrite was detected in the first-generation concretions. Pyrite within the second-generation concretions shows iron and sulphur contents between 40.3 and 47.1 wt% and 44.5 and 54.1 wt%, respectively (compared to 47.0 and 47.4 wt% iron and 53.8 and 54.3 wt% sulphur in the standard material), with average contents of 44.7 wt% iron and 51.4 wt% sulphur. Pyrite contains accessory elements including arsenic (averaging 0.19 wt%), cobalt (averaging 0.19 wt%), nickel (averaging 0.63 wt%), copper (averaging 0.12 wt%) and zinc (averaging 0.03 wt%). Pyrite analysed within the phosphatic microstromatolites shows average iron and sulphur contents of 44.9 and 52.8 wt%, respectively. This pyrite contains accessory elements including arsenic (averaging 0.08 wt%), cobalt (averaging 0.09 wt%), nickel (averaging 0.9 wt%), copper (averaging 0.07 wt%) and zinc (averaging 0.01 wt%).
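As a quick plausibility check on these microprobe averages (our illustration, not part of the original workflow), the reported weight percents can be converted into atomic S/Fe ratios and compared against ideal pyrite (FeS2, S/Fe = 2), using standard molar masses:

```python
# Convert reported average Fe and S weight percents to atomic S/Fe ratios
# and compare with ideal pyrite stoichiometry (FeS2).
M_FE, M_S = 55.845, 32.06  # molar masses in g/mol

def s_fe_atomic_ratio(fe_wt, s_wt):
    return (s_wt / M_S) / (fe_wt / M_FE)

for label, fe, s in [("second-generation concretions", 44.7, 51.4),
                     ("microstromatolites", 44.9, 52.8),
                     ("ideal FeS2", 46.55, 53.45)]:
    print(f"{label}: S/Fe = {s_fe_atomic_ratio(fe, s):.2f}")
# Both measured averages come out close to 2, consistent with pyrite.
```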
FTIR ATR spectra of the mineral composition of the outer concretion, inner concretion and the phosphatic microstromatolites (cf. Fig. 5d) were compared to spectra of pure carbonate-free fluorapatite from Durango, Mexico (Becker et al. 2016), a carbonate fluorapatite from Limburg an der Lahn, Germany (RRUFF database R050529; ca. 20% of the phosphate sites are substituted by carbonate; Downs, 2006; Lafuente et al. 2015) and a calcite reference acquired from a piece of blackboard chalk (Fig. 10). The wavenumber region from 400 to 1,600 cm⁻¹ was chosen because it displays all fundamental vibrations of the phosphate and carbonate anion groups (White, 1974; Böttcher et al. 1997; Penel et al. 1997). Regarding the exact peak positions, it must be emphasized that strong vibrational bands in ATR spectra are slightly red-shifted to lower wavenumbers due to the impact of the complex refractive index instead of pure absorption (Harrick, 1967). Whereas pure, carbonate-free fluorapatite shows only the overlapping stretching bands ν1 and ν3 (ca. 1,100 to 930 cm⁻¹) and bending modes ν4 and ν2 (ca. 630 to 540 and 470 cm⁻¹, respectively) of the phosphate groups, with a flat baseline above 1,100 cm⁻¹, the outer and inner concretions, the microstromatolites and the carbonate fluorapatite from the RRUFF database show additional characteristic bands at approximately 1,425 and 1,450 cm⁻¹ (double-headed arrows in Fig. 10); the latter bands have been assigned to the anti-symmetric stretching vibrations ν3 of the carbonate group replacing phosphate (substitution type B) in the apatite structure (Hofmann, 1997). The spectra of these three Drewer apatites also reveal minor vibrational bands (ν3, ν2, ν4) caused by the carbonate group, positioned at about 1,400, 860 and 710 cm⁻¹, respectively (cf. White, 1974; Böttcher et al. 1997). These results confirm the presence of carbonate groups in the structure of apatite from the Drewer concretions and microstromatolites.
The carbon stable isotope compositions of carbonate fluorapatite show some differences between the phosphatic microstromatolites, the phosphatic concretions and the host rock. δ13C values of the phosphatic microstromatolites are lowest, ranging between −3.3 and −2.2‰, whereas those of the phosphatic concretions are less negative, between −1.7 and −0.8‰ (Fig. 11).
5.a. Phosphogenesis and concretion growth
The Lower Alum Black Shale represents a period of reduced carbonate and clastic sedimentation, as well as high degrees of eutrophication and marine organic productivity leading to anoxic conditions in the sediments and possibly in the bottom waters (Siegmund et al. 2002; Rakocinski et al. 2023). These conditions are similar to present-day upwelling zones, where the lack of oxygen in sediments and bottom waters facilitates the accumulation of phosphorus released during organic matter remineralization (Lomnitz et al. 2016). The Lower Alum Shale Event has been interpreted as a global anoxic event leading to a severe regional extinction of many invertebrate groups (Becker, 1992; Walliser, 1996; Kaiser et al. 2011). Phosphogenesis in black shales and other organic-rich deposits is commonly manifested in the form of laminated crusts, lithified hardgrounds and gravels, disseminated nodules and oval to spherical concretions (Glenn et al. 1994; Scasso & Castro, 1999; Arning et al. 2009a; Bradbury et al. 2015; Filek et al. 2021). Today, authigenic phosphate crusts, nodules and concretions precipitate in organic-rich sediments at shallow depth (Soudry, 2000; Arning et al. 2009a), deposited either below upwelling or oxygen minimum zones (Föllmi, 1996; Arning et al. 2008). The depletion of oxygen is a critical factor because it favours the preservation of organic matter, which is required as a phosphorus source during phosphogenesis (Filippelli, 2011). The Lower Alum Black Shale at Drewer exhibits two concretion types, which are interpreted as representing two different generations of growth (Fig. 5). The first generation is represented by spherical to oval phosphate aggregates, which are overgrown by a second generation that accreted a larger volume of phosphorite (Fig. 5c, d). The first generation of phosphatic concretions grew in organic-rich sediments corresponding to microfacies 1 and was possibly exposed by winnowing. After their exposure, the phosphatic concretions were apparently transported from their formation site into sediments corresponding to microfacies 2. The composition of the first-generation concretions differs from microfacies 2, which also suggests that they did not grow in microfacies 2 sediments but were redeposited before the second generation of concretions formed. Different formation conditions for the two generations of concretions are further indicated by the absence of authigenic pyrite within the first-generation concretions, while pyrite is abundant within the second-generation concretions and in the microstromatolites. In contrast to the first-generation concretions, second-generation concretions incorporated more non-phosphate minerals, including clay minerals, during growth (Figs. 5, 8, 9). These non-phosphate minerals were most likely incorporated from microfacies 2, suggesting that the second-generation concretions grew in situ within this microfacies. In addition, the second-generation concretions exhibit several growth rims that suggest multiple phases of concretion growth within microfacies 2 sediments (Fig. 5b, d). The presence of pyrite dispersed within the second-generation concretions (Fig. 8d) suggests that conditions remained sulphidic within the sediment during the growth of the phosphatic concretions within the Lower Alum Black Shale at Drewer.
5.b. Phosphatic microstromatolites -Palaeoenvironment and palaeoecology
Three possible mechanisms for microbial phosphorus accumulation include the remineralization of organic matter by sulphate-reducing bacteria, the release of internally stored (poly)phosphate by giant chemotrophic sulphide-oxidizing bacteria and the reductive dissolution of iron oxides that releases adsorbed phosphorus (Froelich, 1988; Schulz & Schulz, 2005; Arning et al. 2009a; Berndmeyer et al. 2012). Microbial sulphate reduction is the quantitatively dominant anaerobic process in the remineralization of organic matter in modern continental margin sediments (Ferdelman et al. 1999; Brüchert et al. 2003). Sulphate reduction is considered a main cause of liberating phosphorus from organic matter, enabling the formation of phosphorite crusts and nodules in modern upwelling sediments (Arning et al. 2009a, 2009b). The Lower Alum Black Shale would have provided much organic matter for sulphate-reducing bacteria to release dissolved phosphorus to sedimentary pore water. The distribution, abundance and morphology of authigenic pyrite are further indicators for sulphate reduction in ancient phosphate-rich deposits (Wilkin et al. 1996; Cosmidis et al. 2013; Rickard, 2021). Pyrite is abundant within the second-generation phosphatic concretions and within the microstromatolites (Figs. 8a, c, d, 9f). The remineralization of organic matter and phosphorus release by sulphate reduction probably contributed substantially to phosphogenesis in the organic-rich sediments at Drewer, suggesting that phosphogenesis during early diagenesis was the primary process during growth of the second-generation concretions and microstromatolites.
Phosphorus-rich deposits in modern upwelling regions provide habitats for giant chemotrophic sulphide-oxidizing bacteria (Schulz & Schulz, 2005; Arning et al. 2008; Crosby & Bailey, 2012). The release of phosphate from internally stored (poly)phosphate compounds to pore waters has been demonstrated for such bacteria (Czaja et al. 2016); these chemotrophic bacteria are known to dwell in specific horizons at the redox boundary or in narrow zones at the sediment-water interface, contributing to the formation of phosphorite laminites and crusts on the seafloor (Jørgensen & Revsbech, 1983; Arning et al. 2008, 2009a). Although evidence of the involvement of sulphide-oxidizing bacteria in phosphogenesis at Drewer is lacking, their possible contribution cannot be excluded.
An alternative microbial consortium that could have formed the phosphatic microstromatolites comprises the microorganisms forming Frutexites, which are laminated, arborescent, ferruginous or manganiferous microstromatolites that form in low-energy environments typified by low sedimentation rates and limited oxygen availability (Reitner et al. 1995; Woods & Baud, 2008; Lazar et al. 2013). Although it is not entirely clear which microbial consortia are involved in the formation of ancient Frutexites, iron enrichment appears to be a primary feature, leading to the interpretation that iron-oxidizing bacteria in conjunction with either nitrate or sulphate reduction are key players in their development (Jakubowicz et al. 2014; Heim et al. 2017). Many Frutexites have been interpreted as stromatolites dwelling in deep-water or cryptic habitats that show no preferred phototactic growth direction (Böhm & Brachert, 1993; Gischler et al. 2021). Although the morphology and size of the Drewer microstromatolites are similar to reported Frutexites, the lack of iron oxides in the samples and the formation environment of the concretions argue against this interpretation (cf. Crosby et al. 2014). If bottom and pore waters were indeed anoxic to sulphidic at Drewer, as suggested by trace element analyses (Siegmund et al. 2002; Becker et al. 2016) and by the presence of pyrite, an accumulative mechanism involving the formation and subsequent dissolution of iron oxides would be difficult to achieve, although this may depend on the intensity and expansion of reducing bottom water conditions (Dellwig et al. 2010).
5.c. Stable isotopes and a scenario of microstromatolite and concretion formation
The δ13C PAC values of the phosphatic microstromatolites between −3.5 and −2‰ (Fig. 11) probably do not record early- to mid-Tournaisian seawater dissolved inorganic carbon (DIC). Carbon isotope stratigraphy from the Devonian-Carboniferous boundary suggests high rates of organic carbon burial and drawdown of atmospheric carbon dioxide, as evidenced by positive δ13C excursions across this interval from localities in Europe, China and the USA (Buggisch & Joachimski, 2006; Myrow et al. 2011; Kumpan et al. 2014; Kaiser et al. 2015; Qie et al. 2015). These excursions show a variety of δ13C peaks between +2 and +6‰ and are difficult to correlate globally as they differ between various regions and basins, which makes a reliable reconstruction of the carbon isotope composition of global ocean DIC difficult (Kaiser et al. 2015; Paschall et al. 2019; Barnes et al. 2020). A lack of positive δ13C excursions during the Middle and Late Devonian of the Drewer section was noted by Buggisch & Joachimski (2006), who attributed this lack to a stratigraphic gap where no carbonates were deposited. Kumpan et al. (2015) investigated the δ13C record at Drewer, yet concluded that diagenetic alteration had compromised any reliable δ13C record of seawater. However, a positive δ13C excursion was documented in the Hasselbachgraben section near Oberrödinghausen, which is in the vicinity of the Drewer locality (Kaiser et al. 2006). These δ13C excursions were assigned to the Hangenberg Shale Event and interpreted as a sea-level highstand at the peak of the Hangenberg crisis (Kaiser et al. 2006). Although chemostratigraphic data directly related to the Lower Alum Shale Event are sparse (e.g. Saltzman et al. 2004), Buggisch et al. (2008) reported a positive carbon isotope excursion during this event from coeval sections across Europe, reporting δ13C carbonate values as high as +5‰.
The δ13C composition of seawater DIC from the Late Devonian Hangenberg Shale Event until after the Lower Alum Shale Event was strongly influenced by positive δ13C excursions reflecting perturbations in the global carbon cycle. Moreover, carbonate δ13C values from the latest Devonian through to the mid-Tournaisian never dropped below 0‰, which is distinct from the δ13C values determined for the phosphatic stromatolites and concretions at Drewer (cf. Fig. 11). The δ13C values of the Drewer phosphatic microstromatolites are more negative compared to the δ13C value of Early Carboniferous seawater DIC (Saltzman et al. 2004; Buggisch et al. 2008). Moreover, the δ13C values of phosphatic microstromatolites are also more negative than the δ13C values of the phosphatic concretions and the host material of microfacies 2 (Fig. 11). Inorganic carbon incorporated by phosphate minerals during phosphogenesis was likely a mixture of seawater DIC and DIC that was formed during the anaerobic microbial remineralization of organic matter in the sediment.
The lower δ13C values of the Drewer microstromatolites compared to the phosphatic concretions and host rock suggest that the metabolism of the mat-forming microorganisms may have been responsible for local fractionation of stable carbon isotopes within the microbial mats that formed the microstromatolites. All δ13C values are substantially lower than what would be expected from fractionation during Calvin cycle carbon fixation mediated by oxygenic phototrophic microbial communities (O'Leary, 1988; Laws et al. 1995). Although a covariance of δ13C and δ18O values of the microstromatolites (r² = 0.94) and the phosphatic concretions (r² = 0.97) may argue for some degree of diagenetic overprint with meteoric waters during burial diagenesis (cf. Brand & Veizer, 1981; Banner & Hanson, 1990; Heydari et al. 2001; Tong et al. 2016), the δ13C values of the microstromatolites are still more negative than those determined for the phosphorous concretions and the host rock, suggesting that the signal of microbial fractionation has been preserved to some degree. Moreover, environmental perturbations related to climatic warming may cause shifts of δ13C and δ18O to more negative values, as observed for the Paleocene-Eocene Thermal Maximum and several oceanic anoxic events (Zachos et al. 2006; Ullmann et al. 2014). Therefore, the observed covariance between δ13C and δ18O values of the phosphatic concretions and microstromatolites may additionally be impacted by climatic warming, elevated organic matter burial and increasing seawater temperatures from the late Famennian to the early Tournaisian (cf. Kaiser et al. 2008). Based on the temperature dependence of oxygen isotope fractionation between carbonate minerals, carbonate-bearing apatite and water (O'Neil et al. 1969; Loeffler et al. 2019), and neglecting site-specific isotope fractionation (Zheng, 2016; Aufort et al. 2017), the observed maximum isotope variation (Fig. 11) would indicate a temperature rise of up to 8 and 11 °C. This seems unrealistically high; therefore, at least part of the oxygen isotope shift may be caused by a lowering in the oxygen isotope composition, and therefore salinity, of the pore water. Taken together, it is likely that the δ13C values of the phosphatic microstromatolites do not reflect ambient seawater composition, suggesting that the incorporated carbon within the carbonate fluorapatite reflects microbial fractionation related to a non-phototrophic metabolism.
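To put the quoted temperature estimate in perspective, a back-of-envelope calculation with the classic calcite-water calibration is instructive (our illustration; the apatite-specific calibrations used in the study differ in detail):

$$1000\,\ln\alpha_{\mathrm{calcite\text{-}water}} \;\approx\; \frac{2.78\times10^{6}}{T^{2}} \;-\; 2.89 \qquad (T\ \text{in K}).$$

Near 25 °C this expression changes by roughly −0.2‰ per °C of warming, so a δ18O spread on the order of 1.7-2.3‰ (an assumed magnitude, chosen here to be consistent with the 8-11 °C quoted above) maps onto a temperature difference of roughly 8-11 °C.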
The phosphatic concretions formed in anoxic sediments, with the oxic-anoxic interface likely present in the water column, in a regime of low water energy and low sedimentation rates. Some first-generation concretions are completely surrounded by the phosphatic microstromatolites (Figs. 5c, d, 6a, c, d), with the concretions serving as a substrate for the growth of stromatolite-forming microbial mats. The growth direction of the microbial mats was highly variable, showing no unidirectional upward growth direction as would be expected for phototactic growth as observed in photosynthesis-based cyanobacterial stromatolites (Allwood et al. 2006, 2007). Interestingly, chemotrophic microbial mats growing around carbonate concretions have also been observed at methane seeps in the Black Sea (Reitner et al. 2005). There, the mat-forming methanotrophic archaea and sulphate-reducing bacteria form thin layers around the concretions, showing a growth mode similar to the Drewer microstromatolites. It cannot be ruled out that some of the microstromatolites formed during reworking of the concretions at the sediment-water interface, yet it is unlikely that the delicate branching stromatolites (Fig. 7) would have been preserved during physical reworking, sedimentation and burial of the concretions. The envisaged scenario of microstromatolite formation on phosphatic concretions is depicted schematically in Fig. 12. The first-generation concretions formed within deep-water anoxic, organic carbon-rich sediments during the global sea-level highstand of the Lower Alum Shale Event. The high content of organic matter and anoxic conditions allowed for the accumulation of phosphorus and the formation of phosphatic concretions (Fig. 12a). These first-generation concretions were subsequently moved from their original locus of formation, either by winnowing, bottom currents or mass wasting, and transported into a host sediment consisting of microfacies 2 sediments (Figs. 5, 12b). The phosphatic concretions were then colonized within the soft and porous sediment by microbial mats, probably including sulphate-reducing bacteria. These bacterial mats used the first-generation concretions as templates for biofilm attachment, growing on all sides of the concretions by displacive growth, pushing the soft surrounding sediments outwards, away from the concretion (Fig. 12c). The stromatolite-forming microbial mats showed no preferred growth direction that would suggest phototactic growth, which is in line with the interpretation that the phosphatic concretions and the microstromatolites formed within the sediments. The metabolic activity of sulphate-reducing bacteria gradually shifted the environmental conditions in the sediments from anoxic to sulphidic during a second phase of phosphatic concretion growth (Fig. 12d), as suggested by the occurrence of pyrite within the second-generation concretions (Fig. 8a, d).
Conclusions
The Lower Alum Shale Event is archived within black shales deposited at the base of the Carboniferous, cropping out in the Rhenish Massif at Drewer. These black shales are rich in phosphate minerals, specifically apatite, and contain abundant phosphatic concretions that grew in two generations under anoxic to sulphidic conditions within the sediments during a sea-level highstand. The phosphatic concretions also served as a substrate for phosphatic microstromatolites, which are present in various morphologies, including branched and cauliflower-shaped types attached to the first-generation concretions, as well as individual aggregates present within the second-generation concretions. Carbon stable isotope analyses of PAC derived from the phosphatic microstromatolites reveal low δ13C values compared to dissolved inorganic carbon of Early Carboniferous seawater, possibly pointing to a chemotrophic metabolism of the mat-forming microorganisms. Together with the inferred environmental conditions during black shale deposition and the formation of phosphatic concretions, these results suggest that the stromatolite-forming microbial mats thrived in an aphotic environment. Stromatolites through Earth's history have mostly been interpreted as the remnants of photosynthetic cyanobacteria and therefore seen as indicators of shallow water environments. This study, however, shows that microbial communities forming stromatolites can also inhabit deep-water settings, and that the mat-forming microorganisms do not necessarily depend on photosynthesis as their primary metabolic pathway. The reconstruction of environmental conditions and microbial metabolisms during stromatolite growth should therefore always regard chemotrophy as a feasible alternative to the common interpretation of stromatolites formed by cyanobacteria in shallow water settings within the photic zone.
Figure 1 .
Figure 1.(Colour online) Geologic overview of the Rhenish Massif east of the Rhine river with a focus on Upper Devonian to Lower Carboniferous strata and the location of the Drewer quarry (see red box); after Königshof et al. (2016).
Figure 2 .
Figure 2. (Colour online) Outcrop photograph and sedimentary log of the investigated outcrop in the Drewer quarry; person for scale.
Figure 3 .
Figure 3. (Colour online) Detailed lithostratigraphic log and an outcrop photograph showing the corresponding beds of the Lower Alum Black Shale at Drewer; hammer for scale.
Figure 4 .
Figure 4. (Colour online) Outcrop photographs of the Lower Alum Black Shale at Drewer.(a) Elongated, flat phosphatic concretions within the transition of beds 5f to 5g.(b) Bed 5h with larger, oval to spherical phosphatic concretions overlain by bed 5i and the Erdbach limestone; folding rule for scale.(c) Detail corresponding to the left red rectangle in (b) showing numerous oval phosphatic concretions within bed 5h; coin for scale.(d) Detail corresponding to the right red rectangle in (b) with spherical phosphatic concretions within laminated black shales from bed 5h; coin for scale.
Figure 5 .
Figure 5. (Colour online) Thin section scans and photomicrographs of phosphatic concretions from bed 5g of the Lower Alum Black Shale; mf = microfacies, 1 = first-generation concretion, 2 = second-generation concretion.(a) Thin section scan showing two microfacies of background sediment with numerous smaller spherical, oval and elongated concretions and a large concretion floating within microfacies 2. (b) Close-up view of the larger phosphorous concretion in (a) with the first-and second-generation concretions, whereby the latter completely surrounds the former.White arrows show growth rims in the second-generation concretion.(c) Thin section scan showing microfacies 1 and 2 with numerous smaller concretions and a large concretion floating within microfacies 2. (d) Detailed view of the large concretion in (c) showing the second-generation concretion not completely surrounding the first-generation concretion.Arrows denoting a vague laminated growth pattern in the outer concretion, and the red rectangle highlighting a large aggregate of columnar branching phosphatic microstromatolites within the second-generation concretion.
Figure 6 .
Figure 6.(Colour online) Photomicrographs of phosphatic microstromatolites; ps = phosphatic microstromatolite, ms = microcrystalline silica, cf = carbonate fluorapatite, py = pyrite.(a, b) Large aggregates of columnar branching microstromatolites floating within the second concretion with abundant microcrystalline silica between individual columns.(b) An aggregate of more bulbous, cauliflower-shaped microstromatolites with microcrystalline silica and pyrite (opaque minerals); arrow denoting a radiolarian test.(c) Small aggregates of microstromatolites floating within the phosphatic second-generation concretion, arrow denoting a smaller aggregate.(d) Close-up view of a microstromatolite aggregate exhibiting small-scale darker and lighter laminae; arrows denoting authigenic pyrite within the phosphatic microstromatolite.
Figure 7 .
Figure 7. (Colour online) Photomicrographs of columnar branched phosphatic microstromatolites attached to the first-generation concretion; 1 = first-generation concretion, 2 = second-generation concretion, cv = carbonate vein, cm = clay minerals, qtz = quartz grains. (a) Columnar microstromatolites attached to the first-generation concretion. (b) Columnar, branched microstromatolites attached to the first-generation concretion; arrow denoting pyrite aggregates. (c, d) Columnar microstromatolites among clay minerals from a carbonate vein on the surface of the first-generation concretion. (e, f) Close-up view of columnar microstromatolites showing fine alternations of darker and lighter laminae among dispersed clay minerals and pyrite.
Figure 8 .
Figure 8. (Colour online) Scanning electron microscopy images of phosphatic microstromatolites; 1 = first-generation concretion, 2 = second-generation concretion, ps = phosphatic microstromatolite, ms = microcrystalline silica.(a) Bulbous microstromatolite projecting from the first-generation concretion outwards and showing thin dark laminae of mostly alumosilicates, as well as finely dispersed pyrite (white reflecting minerals).(b) Columnar microstromatolite attached to the first-generation concretion with more intense, frequent interlayering of alumosilicates.(c) Bulbous to cauliflower-shaped microstromatolite with little lamination; red rectangles denoting authigenic pyrite, arrows denoting radiolarian tests.(d) Apparently free-floating microstromatolite aggregate within the second-generation concretion among abundant microcrystalline silica; red rectangles highlighting larger pyrite aggregates within the microstromatolite and the second-generation concretion.
Figure 10 .
Figure 10. Fourier-transform infrared (FTIR) attenuated total reflectance (ATR) spectra. Spectra of (1) first-generation concretion, (2) second-generation concretion, and (3) phosphatic microstromatolite, as well as blackboard chalk, carbonate-bearing fluorapatite (RRUFF database; Downs, 2006; Lafuente et al. 2015), carbonate-free fluorapatite from Durango, Mexico (Becker et al. 2016); broken lines and shaded areas indicate the positions and areas of certain vibrations of the anion groups. The two double arrows at ca. 1,425 and 1,450 cm⁻¹ indicate the anti-symmetric stretching ν3 modes of B-type carbonate groups in apatite. Spectra have been vertically normalized and offset for better visibility. See text for details.
Figure 11 .
Figure 11.(Colour online) Cross plot showing carbon and oxygen stable isotope compositions of phosphate-associated carbonate (PAC) of the phosphatic microstromatolites, phosphatic concretions and the phosphatic microfacies 2 of the host rock.
Figure 12 .
Figure 12. (Colour online) Schematic cartoon illustrating the formation of phosphatic concretions and phosphatic microstromatolites at Drewer, SWI = sediment-water interface.(a) Autochthonous formation of first-generation concretions in anoxic, organic-rich sediments.(b) Transport and redeposition of first-generation concretions.(c) Growth of phosphatic microstromatolites on first-generation concretions within sulphidic sediments.(d) Formation of second-generation concretions around first-generation concretions during sulphidic conditions in the sediment.
… during the Mississippian (Kaiser et al. 2011; Becker et al. 2021). The Lower Alum Shale Event is also represented by black shales and cherts in the Holy Cross Mountains in Poland and from the Lydiennes Formation in the … (Siegmund et al. 2002) … it suggests that the prolonged sea-level highstand with eutrophication in the photic zone (cf. Siegmund et al. 2002) favoured phosphogenesis at Drewer. The scarcity or the lack of bioturbation in the black shales at Drewer (Siegmund et al. 2002; own observation) agrees with episodically anoxic bottom water conditions. | 2023-08-27T15:31:43.142Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "2663adb3fc2247eef3ae67a8e47a851d31deaa90",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/69379565071407774593C93C709E68F4/S0016756823000493a.pdf/div-class-title-chemotrophy-based-phosphatic-microstromatolites-from-the-mississippian-at-drewer-rhenish-massif-germany-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "c6d8d91e89b00cd054981996e449abbafc9bb713",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
229472338 | pes2o/s2orc | v3-fos-license | Fulfilled Expectations about Leaders Predict Engagement through LMX
Abstract: Drawing on the bandwidth-fidelity principle (Cronbach & Gleser, 1957), this paper challenges the use of broad Implicit Leadership Theories (ILTs) domains in predicting organizational outcomes (i.e., prototypic ILTs and anti-prototypic ILTs) and provides preliminary arguments for examining the effects of narrow ILTs traits (e.g., sensitivity, intelligence) on LMX and, consequently, on work engagement. Specifically, using polynomial regression and response surface methodology, I examined the effects of followers' ideal-actual ILTs congruence on LMX. Additionally, using the block variable approach, I tested the mediation effects of LMX on the relationship between ideal-actual ILTs congruence and work engagement, on a sample of 68 employees. The results showed that followers' fulfilled expectations about sensitivity and tyranny had linear effects on LMX, indicating the generalized benefits for leaders of being high on sensitivity and low on tyranny to enhance followers' LMX. Intelligence, dedication, dynamism, and masculinity had non-linear effects, revealing that fulfilling followers' expectations is the best option for leaders to develop high-quality relationships with their followers. The mediation hypothesis received partial support, suggesting that additional mechanisms can explain the relationship between followers' ideal-actual ILTs congruence and work engagement.
The human mind is hardwired to make sense of the world. To cope with the complexities of our lives, we rely on simplifying cognitive mechanisms, such as conceptual categories or mental models to map and navigate the world (Fiske & Taylor, 1991). Implicit Leadership Theories (ILTs) are an example of such mental models that incorporate desired attributes of leaders in professional settings (Eden & Leviatan, 1975;Lord, Foti, & de Vader, 1984;Lord & Maher, 1991). Their practical utility stems from their role during leader-follower interactions when they are used by their holders as benchmarks to predict and interpret leaders' behaviors and attitudes and to respond in an adaptive manner (Lord & Maher, 1991).
Correspondence for this article should be addressed to Andreea A. Petruș, Psychology Department, Faculty of Psychology and Educational Studies, University of Bucharest, Panduri Street 90, Bucharest, Romania.

ILTs have been proven to have considerable significance in predicting employees' organizational attitudes and their performance (e.g., Ayman & Chemers, 1983; Biermeier-Hanson & Coyle, 2019; Epitropaki & Martin, 2005; Junker, Schyns, van Dick, & Scheurer, 2011; Khorakian & Sharifirad, 2018; Riggs & Porter, 2016). Building upon initial theoretical assumptions, most of the research conducted on ILTs has focused on how ILTs impact various organizational outcomes through the relationship between leaders and followers (leader-member exchange, LMX; Junker & van Dick, 2014). Specifically, when leaders live up to their followers' expectations, there is a high likelihood that followers will have positive affective reactions towards their leaders and develop high-quality relationships with them, whereas when leaders fall short of their followers' expectations, followers tend to develop negative affective responses and low-quality relationships with their leaders (Lord & Maher, 1991). Consequently, followers behave in a manner aligned with their feelings and the perceived quality of the dyadic relationship, which eventually leads to different outcomes, such as counterproductive work behaviors (CWB) in the case of low LMX (Biermeier-Hanson & Coyle, 2019) or organizational commitment in the case of high LMX (Epitropaki & Martin, 2005).
Many authors have tried to determine the content of ILTs. The best empirically tested and most extensively used factor structure is the one developed by Offerman, Kennedy, and Wirtz (1994) and revised by Epitropaki and Martin (2004). It consists of 21 attributes of leaders, grouped into 4 prototypic or positive factors (Sensitivity, Intelligence, Dedication, and Dynamism) and 2 anti-prototypic or negative factors (Tyranny and Masculinity). Lately, a growing body of research has tested the impact of the congruence between followers' preferences regarding ILTs traits of ideal leaders and the recognition of those ILTs traits in their actual leaders on various outcomes, such as perceived leadership, work attitudes, turnover intentions, performance or development (e.g., Rahn, Jawahar, Scrimpshire, & Stone, 2016; Riggs & Porter, 2017; Rupprecht, Kueny, Shoss, & Metzger, 2016; Wang & Peng, 2016). With the exception of the study conducted by Rupprecht and her colleagues (2016), which focused on the impact of a single ILTs trait, namely Sensitivity, on CWB, all the other empirical studies tested the impact of the broad dimensions of ILTs traits, either prototypical or anti-prototypical, on organizational outcomes. The two broad ILTs dimensions comprise subsets of related (i.e., highly correlated), yet distinct traits. While their combined effects have proven to have predictive utility, their criterion validity can be maximized when they work separately. Some positive ILTs traits, like Sensitivity, might be more important for affectively loaded outcomes, such as job attitudes, whereas others, such as Intelligence, might be more important for performance outcomes. The concept of bandwidth fidelity (Cronbach & Gleser, 1957; Salgado, 2017) indicates that there should be compatibility between the nature and breadth of the predictor and those of the outcome variable. In the personality literature, when narrow personality measures (i.e., facets) were used instead of broad dimensions, not only were narrow criteria better predicted, but narrow measures also explained supplementary variance in broad outcomes over and above broad dimensions (Ashton, 1998; Jenkins & Griffith, 2004; Tett & Burnett, 2003). Despite the theoretical provision of the bandwidth-fidelity framework, no research has empirically tested it for narrow ILTs traits. Given the heterogeneous content of ILTs, it might have practical relevance to explore their effects individually, not on a global level. Therefore, the main purpose of this study was to address this gap and explore whether the congruence between employees' narrow ILTs traits and recognition of those traits in their actual leaders had different associations with LMX, which in turn had different implications for engagement.
This study contributes to the social-cognitive perspective of the leadership literature by examining, in a nuanced manner, how congruence between each ILTs trait and recognition of that trait in leaders impacts LMX and engagement, using polynomial regression analysis and graphing the three-dimensional response surface generated for the combination of two predictor variables, namely the ideal ILTs trait and recognition of that trait in leaders, and follower-rated LMX. Additionally, this study challenges the conventional expectation that all positive ILTs traits always have a positive impact on LMX, by showing that even inherently good attributes might have negative consequences on LMX when they exceed holders' preferences. Furthermore, except for the study conducted by Epitropaki and Martin (2005), no empirical research has addressed the relationship between anti-prototypical ILTs traits and LMX or other outcome variables.
Theoretical background and hypotheses
Implicit Leadership Theories: a brief overview

ILTs are focal concepts of the leadership categorization theory (Eden & Leviatan, 1975; Lord, Foti, & de Vader, 1984). The central assumption of this theory is that people form and hold in their long-term memory mental models of leaders, which they use as benchmarks to automatically judge organizational actors and make spontaneous decisions about whether they are (ideal) leaders or not. ILTs are structured in memory from early childhood, during socialization with authority figures such as parents and teachers (Keller, 1999; Keller, 2003), and restructured continuously in an adaptive manner to integrate new experiences with leaders (Shondrick & Lord, 2010). According to Lord and Maher (1991), ILTs are encoded in a hierarchical structure that includes attributes for various types of leaders in different contexts. As such, ILTs contain three different levels of abstraction: a superordinate level, where the most abstract attributions that differentiate leaders from non-leaders are held (e.g., domineering versus compliant), a basic level, where representations contain information about leaders in specific contexts (e.g., business leaders versus political leaders), and a subordinate level, where more situational and exclusive attributes about leaders are encoded (e.g., top-level versus middle-level business leaders). During interactions with others in professional settings, people use their hierarchically structured ILTs attributes to compare the target person with a category of leaders. Once a match is produced, the ILTs holder labels the other person according to the category and assigns him or her all the other attributes of that specific category, irrespective of whether they are characteristic of the target person or not. Although the ILTs structure developed by Offerman, Kennedy, and Wirtz (1994) and revised by Epitropaki and Martin (2004) has been the most frequently used in business settings, according to the systematic review conducted by Junker and van Dick (2014), researchers have conceptualized it differently, either as a set of attributes of ideal leaders (i.e., exceptionally positive leaders) or as a set of attributes of prototypic leaders (i.e., average leaders). The study conducted by Van Quaquebeke, Graf and Eckloff (2014) showed that the two conceptualizations had considerable overlap, but only the ideal one was predictive of affective commitment towards the leader, respect for the leader, satisfaction with leadership, LMX and intention to leave. Therefore, for the purpose of this study, the ideal conceptualization of ILTs was used.
Even though prior studies have investigated ideal-actual ILTs congruence at a higher level of aggregation, by linking the cumulative effect of either positive ILTs congruence or negative ILTs congruence to LMX, of practical relevance is the congruence at the level of narrow ILTs traits. One argument supporting this view is the fact that people endorse different ILTs traits in specific contexts. For example, in educational settings, leaders' capacities to build positive relationships with students and teachers and their ability to develop an effective curriculum are key drivers of academic achievement (Hallinger, 2001; Robinson, Lloyd, & Rowe, 2008). These two aspects can translate into sensitivity and intelligence, two positive traits that add to the positive ILTs aggregate score. On the other hand, sensitivity can fall behind in other types of settings, such as the military one, where dominance takes precedence (Rueb, Erskine, & Foti, 2008). Thus, investigating the consequences of each ILTs trait may be more informative from both a theoretical and a practical point of view.
Ideal-actual ILTs congruence
When ILTs were used in applied settings to determine their impact on various organizational outcomes, researchers measured them either directly, by asking participants about the degree to which their leaders possess specific ILTs traits (e.g., Khorakian & Sharifirad, 2019), or indirectly, by measuring two sets of ILTs traits, one representing participants' expectations of ideal leaders and a parallel one assessing recognition of those ILTs traits in their actual leaders (e.g., Biermeier-Hanson & Coyle, 2019). In the second case, researchers computed a congruence score, underpinning the ideal-actual match, which they used to predict various outcomes. In most of the studies, congruence scores were computed as difference scores, either absolute or squared differences, but Edwards (2002) encouraged the use of polynomial regression instead. The most important advantage of polynomial regression is its potential to extract more practical information, such as the differential impact of the direction of incongruence (i.e., ideal > actual or ideal < actual) or of the degree of congruence (i.e., congruence at high levels or congruence at low levels). Based on Edwards' recommendations, and given that recent studies have started to utilize polynomial regression, for this study I measured two sets of scores (i.e., preferences for ideal leaders' ILTs and recognition of those ILTs in actual leaders) and used them in polynomial regression analysis with response surface methodology.
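To make this analytic setup concrete, the sketch below shows a quadratic polynomial regression of LMX on centered ideal and actual trait ratings, together with the standard response surface parameters (Edwards & Parry, 1993). It is a minimal illustration with simulated data; the variable names, the assumed 1-5 rating scale and the simulated congruence effect are hypothetical, not the study's data.

```python
# Polynomial regression with response surface tests for ideal-actual
# congruence predicting LMX; data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 68
ideal = rng.normal(4.2, 0.6, n)    # follower's ideal rating of one ILTs trait
actual = rng.normal(3.8, 0.8, n)   # recognition of that trait in the leader
# Toy outcome: LMX peaks where the actual rating matches the ideal one.
lmx = 4.0 - 0.5 * (ideal - actual) ** 2 + rng.normal(0, 0.4, n)

I = ideal - 3.0                    # center on the scale midpoint (assumed 1-5
A = actual - 3.0                   # scale) before computing the higher terms
X = pd.DataFrame({"I": I, "A": A, "I2": I**2, "IxA": I * A, "A2": A**2})
b = sm.OLS(lmx, sm.add_constant(X)).fit().params

a1 = b["I"] + b["A"]               # slope along the congruence line (I = A)
a2 = b["I2"] + b["IxA"] + b["A2"]  # curvature along the congruence line
a3 = b["I"] - b["A"]               # slope along the incongruence line (I = -A)
a4 = b["I2"] - b["IxA"] + b["A2"]  # curvature along the incongruence line
print(f"a1={a1:.2f}, a2={a2:.2f}, a3={a3:.2f}, a4={a4:.2f}")
```

In this framework, a1 and a2 describe how the outcome behaves when expectations are exactly fulfilled at increasingly high trait levels, whereas a3 and a4 capture the effect of the direction and degree of incongruence; a significantly negative a4 is the usual signature of a congruence effect.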
Ideal-actual ILTs congruence and LMX
LMX represents another significant leadership framework that emphasizes the dyadic relationship between leaders and followers (Gerstner & Day, 1997; Graen & Uhl-Bien, 1995). Drawing on the principles of social exchange theory (Blau, 1964), the core assumption of LMX stipulates that leaders and followers alike develop mutual relationships that differ in quality, depending on the bidirectional exchanges between partners. High-quality relationships are characterized by mutual trust, respect, and exchanges that go beyond regular job requirements, whereas low-quality relationships are based on reciprocal exchanges that are limited to formal job requirements (Graen & Uhl-Bien, 1995).
Previous studies have proven the impact of ideal-actual ILTs congruence on LMX (e.g., Epitropaki & Martin, 2005; Rupprecht, Kueny, & Shoss, 2016). As mentioned previously, ILTs have an important role in guiding employees' perceptions and attributions about their leaders, with both perceptions and attributions modulating the dynamic of the leader-follower relationship. When followers' positive perceptions of actual leaders' behaviors match their expectations, an automatic recognition process is generated (Lord & Maher, 1991). This process predisposes followers to make positive initial impressions of their leaders, which, in turn, color subsequent perceptions, following a perception-behavior sequence: initial positive judgments bias followers to behave in a desirable way during interactions with leaders. These behaviors attract positive reactions from leaders, which sequentially reinforce the initial positive perceptions. Thus, leaders are perceived to be trustful and relationships are perceived as highly qualitative. Moreover, followers' desirable behaviors and attitudes stimulate equivalent behaviors and attitudes from leaders, such as providing additional attention, support, and resources. It is a mutual influence process that feeds back into the followers' perception of a high-quality LMX with leaders (Lord & Maher, 1991). Edwards and Cable (2009) tested a conceptual theoretical model with 4 explanatory mechanisms that linked value congruence (i.e., employees' perceptions that the organization shares their values) to organizational outcomes. The mechanisms were: enhanced communication, predictability, interpersonal attraction, and trust. The same underlying mechanisms can explain the link between ILTs congruence and LMX, given that both the leader and the organization are contextual elements and operate in a similar fashion in the relationship with the employees. Another explanatory mechanism linking positive ILTs congruence to LMX is the Pygmalion effect (Rosenthal, 1993). Whiteley, Sy and Johnson (2012) showed that fulfilled positive expectations about the dyadic partner give rise to a "naturally occurring Pygmalion effect" (p. 822), a self-fulfilling prophecy which creates a propensity for the holders of the expectations to make other positive inferences about the dyadic partner, which eventually impacts LMX positively following the above-mentioned perception-behavior sequence. In addition to testing whether fulfilled positive expectations predict high LMX, results of previous studies suggest that the level at which fulfillment is achieved matters too (e.g., Rupprecht et al., 2016). Having a high need satisfied brings more benefits than having a moderate or even a low one satisfied. In the first case, a significant positive affective reaction can be triggered, whereas in the second, the affective effects might be negligible. Because there is no specific information in the literature on how each ILTs trait relates to LMX, but relying on the results of previous research that revealed a positive association between the cumulative effect of all positive ILTs traits and LMX, I hypothesized the following: Hypothesis 1: Followers' intra-personal congruence at higher levels of positive ILTs traits will be associated with higher ratings of follower-rated LMX, as compared to congruence at lower levels of positive ILTs traits. This hypothesis was tested separately for each positive ILTs trait, as follows: H1a - Sensitivity, H1b - Intelligence, H1c - Dedication and H1d - Dynamism.
When positive expectations are not fulfilled, low-quality LMX develops, in which dyadic exchanges stay within the limits of formal roles. Based on the needs-supplies fit assumption formulated by Edwards, Caplan and Harrison (1998), who asserted that both under- and oversupply can be detrimental, it was expected that there was an optimum level of positive ILTs traits manifested by leaders for LMX to be maximized. Getting even more from leaders than expected might be problematic for followers, because receiving more of one kind of supply can impede other job-related needs from being satisfied (Edwards et al., 1998). Another explanation was offered by Harris and Kacmar's (2006) study, which revealed, contrary to the obvious intuition, that having a high LMX with their leaders led to a higher level of stress for followers, because of the strong obligation followers felt to reciprocate for the advantages obtained from their leaders. Nevertheless, not receiving enough when the requirement for a specific ILTs trait is high can be more damaging than getting more of a good thing, because the underlying unfulfilled need is felt more intensely and urgently. The idea is captured in the loss aversion concept introduced by Kahneman and Tversky (1979), who stated that the pain of losing is felt so powerfully that people take risks to avoid it. Given all this, I proposed the following hypothesis:

Hypothesis 2: When the direction of the followers' intra-personal incongruence is such that the scores of the followers' ideal positive ILTs traits (i.e., preferences) are above the scores of the actual positive ILTs traits of their leaders (i.e., recognition), the level of follower-rated LMX will be lower, as compared to the situation when the ideal positive ILTs traits are below the scores of the actual positive ILTs traits of the leaders. This hypothesis was tested separately for each positive ILTs trait, as follows: H2a - Sensitivity, H2b - Intelligence, H2c - Dedication and H2d - Dynamism. (Direction of unfulfilled positive expectations hypothesis)

Regarding the negative ILTs traits, to the best of my knowledge, only one previous empirical study investigated the relationship between ideal-actual congruence and LMX, and it revealed no significant association between them (Epitropaki & Martin, 2005). Nevertheless, the mentioned study used absolute difference scores to approximate the congruence, and therefore the results might have been hampered by the methodological problems associated with difference scores (Edwards & Parry, 1993). Moreover, the authors used the broad negative dimension, which encompasses two ILTs traits, specifically Tyranny and Masculinity. In case the ideal-actual congruence scores for the two traits had different associations with LMX, their aggregation might have ended up canceling the effects out. While ideal-actual congruence for positive ILTs promotes better LMX, it is expected that, on the flip side, ideal-actual congruence for negative ILTs hinders LMX. Leung and Sy (2018) found that when negative Implicit Followership Theories (i.e., attributes of ideal followers) were fulfilled at a group level, a Golem effect, a dark self-fulfilling process, was triggered, having negative effects on performance.
Therefore, the following hypotheses were proposed, paralleling the hypotheses suggested for the positive ILTs, but making the necessary logical changes for the dark side of ILTs traits:

Hypothesis 3: Followers' intra-personal congruence at lower levels of negative ILTs traits will be associated with higher ratings of follower-rated LMX, as compared to higher levels of negative ILTs traits. This hypothesis was tested separately for each negative ILTs trait, as follows: H3e - Tyranny and H3f - Masculinity. (Fulfilled negative expectations hypothesis)

Hypothesis 4: When the direction of the followers' intra-personal incongruence is such that the scores of the followers' ideal negative ILTs (i.e., preferences) are above the scores of the actual negative ILTs of their leaders (i.e., recognition), the level of follower-rated LMX will be higher, as compared to the situation when the ideal negative ILTs traits are below the scores of the actual negative ILTs traits of the leaders. This hypothesis was tested separately for each negative ILTs trait, as follows: H4e - Tyranny and H4f - Masculinity. (Direction of unfulfilled negative expectations hypothesis)
LMX as a mediator between ILTs congruence and engagement
Work engagement is a positive affective and highly motivational state that can be experienced by employees who perceive that their job resources are plentiful for handling their demands (Bakker & Demerouti, 2014). High-quality LMX with leaders has been shown to lead to the perception of a resourceful work environment, because it comes with enriched jobs, empowerment, and social support for followers (Breevaart, Bakker, Demerouti, & van den Heuvel, 2015). Huel and his colleagues (2017) found a moderate meta-analytical association between LMX and engagement. Epitropaki and Martin (2004) showed that LMX mediated the relationship between ILTs congruence and well-being. Additionally, consistent with leader categorization theory (Lord & Maher, 1991), which asserts that once a person is labeled as a good leader many direct and indirect effects on organizational outcomes are triggered, and with the abundance of prior research supporting that the effect of fulfilled expectations about leaders impacts various outcomes through LMX (Junker & van Dick, 2014), it was further expected to find an indirect effect of ideal-actual ILTs congruence on engagement through LMX. By contrast, followers perceiving low-quality LMX with their leaders can feel deprived of some resources, such as leaders' support, and are more strongly constrained to formal job tasks, so that they may not be as motivated and engaged as their colleagues in high-quality relationships with leaders. Consequently:

Hypothesis 5: Followers' intra-personal congruence between ideal and actual ILTs has an indirect effect on engagement through LMX. This hypothesis was tested separately for each ILTs trait, as follows: H5a - Sensitivity, H5b - Intelligence, H5c - Dedication, H5d - Dynamism, H5e - Tyranny and H5f - Masculinity.
Method

Participants and procedure
Participants were recruited through snowball sampling. The sample included 68 working adults who were willing to participate voluntarily in the study. Their ages ranged from 22 to 55 years old (M = 35.04, SD = 7.78). Male respondents accounted for 27% of the sample. Regarding their educational level, 8.8% had graduated high school, 42.6% had undergraduate studies, 36.8% graduate studies and 11.8% postgraduate education. In terms of tenure, 8.8% had between one and three years of work experience, 14.7% between three and five, 25% between 6 and 10, 35.3% between 10 and 20, and 16.2% more than 20 years of experience. Regarding their leadership experience, 60.2% had none, 20.6% had less than three years, 7.4% between three and five, 4.4% between 5 and 10, and 7.4% more than 10 years of leadership experience.
Measures

LMX was measured with the 7-item leader-member exchange scale developed by Graen and Uhl-Bien (1995). On a 5-point scale, participants were asked to rate the quality of their relationship with the leader. Sample items include: "How well does your manager understand your job problems and needs?" and "I know where I stand with my manager." Work engagement was measured with the 9-item scale included in the Job Demands-Resources Questionnaire developed by Bakker (2014). Participants were asked to rate how characteristic each of the statements was for them. Each item was rated on a 7-point scale, with responses ranging from never to always. Sample items include: "At my work, I feel bursting with energy" and "I am proud of the work that I do".
Analytical strategy
Polynomial regression analysis with response surface modeling (Edwards & Parry, 1993) was used to test the hypotheses. Most of the research addressing ILTs congruence has used difference scores (e.g., Coyle & Foti, 2014; Epitropaki & Martin, 2005). This methodological approach has been criticized for numerous disadvantages, such as the fact that it reduces a three-dimensional relationship to a two-dimensional one and that meaningful congruence hypotheses cannot be tested with difference scores (Edwards, 2002; Edwards, 2007). Polynomial regression is a more robust and informative analytical tool because it allows one to test not only the extent to which congruence between two variables is related to an outcome, but also how the direction of the (in)congruence (i.e., Ideal ILTs trait > Actual ILTs trait or Ideal ILTs trait < Actual ILTs trait) and the level of congruence (i.e., when both ideal ILTs trait and actual ILTs trait are high or both are low) are related to the outcome (Rupprecht, Reynolds Kueny, & Shoss, 2016; Shanock, Baran, Gentry, Pattison, & Heggestad, 2010). As an example, for predicting LMX from the congruence between ideal Sensitivity and actual Sensitivity recognized in leaders, one of the positive ILTs traits, the regression equation was the following: LMX = b0 + b1*Sensitivity_I + b2*Sensitivity_A + b11*Sensitivity_I^2 + b12*Sensitivity_I*Sensitivity_A + b22*Sensitivity_A^2 + e, where b0 is the intercept, the remaining b's are the regression coefficients of the respective terms, I stands for the ideal Sensitivity (i.e., preference), and A stands for actual Sensitivity (i.e., recognition of Sensitivity in the actual leader).
Prior to testing the models, scores for ideal and actual ILTs were centered to their midpoints, by subtracting 5 from each score, because both ideal ILTs and actual ILTs were measured on a 9-point Likert scale. This procedure was recommended because it reduces multicollinearity and facilitates the interpretation of the results (Aiken & West, 1991;Edwards & Parry, 1993). Thus, the coefficients for ideal ILTs traits and actual ILTs traits represent the slope of the surface at the center of the X-Y plane, namely the plane defined by the ideal ILTs traits and actual ILTs traits. For each trait, I computed three new variables necessary for the quadratic equation, namely: the square of the centered ideal ILTs trait, the square of the centered actual ILTs trait and the product between the centered ideal ILTs trait and centered actual ILTs trait. In total, 6 quadratic regressions were run for all ILTs traits. Based on the coefficients from the quadratic equation, the response surface pattern was determined for each combination of variables. Subsequently, I deployed polynomial regressions in SPSS for each of the ILTs traits, regressing LMX on the centered predictor variables, the squares of their centered values and the product of their centered values.
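A minimal sketch of this centering-and-term-construction step in Python (the file name and the column names sens_ideal, sens_actual and lmx are hypothetical placeholders, not the study's actual variable names):

import pandas as pd
import statsmodels.api as sm

# Both ILTs ratings are on a 9-point scale, so centering at the scale
# midpoint means subtracting 5 from each score.
df = pd.read_csv("ilts_lmx.csv")      # assumed data file
df["I"] = df["sens_ideal"] - 5        # centered ideal Sensitivity
df["A"] = df["sens_actual"] - 5       # centered actual Sensitivity

# The three second-order terms required by the quadratic equation
df["I2"] = df["I"] ** 2
df["IxA"] = df["I"] * df["A"]
df["A2"] = df["A"] ** 2

X = sm.add_constant(df[["I", "A", "I2", "IxA", "A2"]])
model = sm.OLS(df["lmx"], X).fit()
print(model.summary())                # b1, b2, b11, b12, b22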
Using the polynomial regression coefficients, I computed slopes and curvatures along the line of congruence and the line of incongruence for each equation, using the Excel spreadsheet built by Shanock and her colleagues (2010). These parameters provided information about the shape of the surface, whether it was convex, concave or saddle-shaped, which gave information about the overall relationship between variables. The line of congruence represents the line of perfect fit, where the ideal ILTs trait score is equal to the actual ILTs trait score (e.g., Sensitivity_I = Sensitivity_A). The slope along the line of congruence gives indications on how the congruence predicts the level of the outcome (i.e., the height of the outcome), whereas the curvature reveals whether the relationship between the congruence and the outcome is linear or nonlinear. The line of incongruence is perpendicular to the line of congruence and reflects perfect misfit, where the ideal ILTs trait score equals minus the actual ILTs trait score (e.g., Sensitivity_I = -Sensitivity_A). The slope along the line of incongruence shows whether the direction of the misfit (i.e., Sensitivity_I > Sensitivity_A or vice versa) produces an effect on the level of the outcome. A significant curvature along the line of incongruence indicates how the degree of the misfit affects the outcome. A negative curvature means that the outcome is more sharply reduced as the misfit between the ideal and actual ILTs trait increases.
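The surface parameters are simple combinations of the fitted coefficients (Shanock et al., 2010); a minimal sketch continuing from the model fitted above (note that testing their significance additionally requires the coefficient covariance matrix, which the Excel spreadsheet handles):

# a1/a2: slope and curvature along the line of congruence (I = A);
# a3/a4: slope and curvature along the line of incongruence (I = -A).
b = model.params
a1 = b["I"] + b["A"]
a2 = b["I2"] + b["IxA"] + b["A2"]
a3 = b["I"] - b["A"]
a4 = b["I2"] - b["IxA"] + b["A2"]
print(f"a1={a1:.2f}, a2={a2:.2f}, a3={a3:.2f}, a4={a4:.2f}")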
Consequently, I used the same polynomial regression coefficients to plot the threedimensional response surfaces for each set of three variables, namely ideal ILTs trait, actual ILTs trait depicted in the horizontal plane and LMX depicted on the vertical axis. For that purpose, I used Origin Pro 2020 software.
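An equivalent sketch of such a surface plot in Python with matplotlib, reusing the fitted coefficients b from the snippet above (Origin Pro itself is a separate GUI application):

import numpy as np
import matplotlib.pyplot as plt

# A 9-point scale centered at its midpoint spans -4..+4 on both axes.
grid = np.linspace(-4, 4, 41)
I, A = np.meshgrid(grid, grid)
Z = (b["const"] + b["I"] * I + b["A"] * A
     + b["I2"] * I**2 + b["IxA"] * I * A + b["A2"] * A**2)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(I, A, Z, cmap="viridis")
ax.set_xlabel("Ideal ILTs trait (centered)")
ax.set_ylabel("Actual ILTs trait (centered)")
ax.set_zlabel("Predicted LMX")
plt.show()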
For testing the mediation hypotheses, I used Edwards and Cable's (2009) block variable method. First, for each ILTs trait, I computed a block variable: a weighted linear composite consisting of the joint effects of the five quadratic terms (e.g., for Sensitivity: Sensitivity_I, Sensitivity_A, Sensitivity_I squared, Sensitivity_I × Sensitivity_A, Sensitivity_A squared), in which the weights were the standardized regression coefficients from the polynomial regression. Then I used Hayes' PROCESS macro for SPSS (2018) to assess the indirect effect of each block variable on Engagement via LMX.
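A rough sketch of the block-variable construction and of the indirect-effect point estimate, assuming the centered columns from above plus an engagement column (the bootstrapped confidence intervals produced by the PROCESS macro are not reproduced here):

import statsmodels.formula.api as smf

# Block variable: composite of the five quadratic terms, weighted by the
# standardized regression coefficients (Edwards & Cable, 2009).
terms = df[["I", "A", "I2", "IxA", "A2"]]
std_betas = model.params[terms.columns] * terms.std() / df["lmx"].std()
df["block"] = terms.mul(std_betas, axis=1).sum(axis=1)

# Point estimate of the indirect effect block -> LMX -> engagement.
m1 = smf.ols("lmx ~ block", data=df).fit()
m2 = smf.ols("engagement ~ block + lmx", data=df).fit()
indirect = m1.params["block"] * m2.params["lmx"]
print(f"indirect effect (point estimate): {indirect:.3f}")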
Results

Table 1 presents the means, standard deviations, internal consistencies, and correlations between study variables. Additionally, the table includes correlations with several control variables (i.e., demographics) to give a more comprehensive understanding of the data, but these were not included in the subsequent analysis because there was no theoretical argument to do so. Using Gignac and Szodorai's (2016) criteria for assessing the magnitude of correlations, ILTs traits had moderate to large correlations with LMX and engagement, in line with expectations.

Table 2 presents both the first-order models, with ideal ILTs traits and actual ILTs traits as predictors, and the second-order models, which additionally include the second-order components specified in the quadratic equation above. As can be seen in the table, the second-order models showed larger effect sizes than the first-order models, indicating that exploring not only ideal ILTs traits and actual ILTs traits but also their simultaneous effect on LMX had practical value.
Based on the response surface results presented in Table 2 and the graphs depicted in Figure 1, I examined how the (in)congruence between ideal positive ILTs traits and actual positive ILTs traits, together with its degree and direction, related to LMX. For Sensitivity, the surface analysis revealed a significant positive slope along the line of congruence (.27**). This indicates that when ideal Sensitivity and actual Sensitivity were congruent, LMX increased as both increased. In Figure 1a, the highest level of LMX was reached at the right corner of the graph, where both ideal Sensitivity and actual Sensitivity were high. The curvature along the line of congruence was insignificant (.05), which meant that the relationship between variables was linear. These results were in support of H1a. The slope along the line of incongruence was negative and significant (-.35*), which meant that LMX was lower when the incongruence was such that the level of ideal Sensitivity was above the level of actual Sensitivity. Indeed, the graph depicted in Figure 1 shows that LMX decreased toward the front corner of the graph, as ideal Sensitivity increased and actual Sensitivity decreased.
The curvature along the line of incongruence was negative and insignificant (-.01), which indicated a linear relationship. Thus, H2a was supported.

For Intelligence, the surface analysis showed an insignificant positive slope (.30) and an insignificant positive curvature (.29) along the line of congruence. Thus, H1b did not receive support. Nevertheless, the values of the parameters were moderate, suggesting that with higher statistical power they might have reached significance. Indeed, as seen in the response surface graph presented in Figure 1, the relationship between the three variables generated a convex surface. Had the results been significant, they could have been interpreted as follows: LMX was higher when ideal and actual Intelligence were congruent at lower levels and at higher levels (right and left corners of the figure), and LMX was lower when the two predictors were congruent at middle levels. With respect to the line of incongruence, the results revealed a significant negative slope. This meant that LMX was lower when the incongruence was such that actual Intelligence was below ideal Intelligence, compared to when actual Intelligence was above ideal Intelligence. Thus, there was support for H2b. The curvature along the line of incongruence was negative but insignificant, indicating a linear relationship between variables along the line of incongruence.

Regarding Dedication, the response surface analysis revealed an insignificant positive slope (.06) and an insignificant positive curvature (.07) along the line of congruence. Thus, H1c did not receive support. Nevertheless, visual inspection of the graph depicted in Figure 1 indicated a convex response surface and therefore a tendency for LMX to increase as the congruence between ideal and actual Dedication increased. Additionally, the results showed an insignificant negative slope (-.34) and an insignificant negative curvature (-.15) along the line of incongruence. Therefore, H2c was not supported. However, the magnitude of the slope along the line of incongruence was moderate. Together with the negative value of the curvature along the line of incongruence, this meant that the relationship between variables had a concave shape along the line of incongruence. Had statistical significance been achieved, the interpretation would have been as follows: LMX decreased more sharply as the level of incongruence between ideal Dedication and actual Dedication increased, and reached its minimum level when ideal Dedication was above actual Dedication. Indeed, the same conclusion can be drawn by visually inspecting the 3D graph in Figure 1, where the lowest level of LMX is reached in the front corner of the graph, where ideal Dedication is high and actual Dedication is low.

For Dynamism, the results showed a significant positive slope (.18**) and a significant positive curvature (.22**) along the line of congruence. These results indicate that LMX increased in a non-linear manner when ideal Dynamism and actual Dynamism were congruent either at higher levels or at lower levels, but not at average levels. Thus, H1d was not supported, since the relationship was not linear. Visual inspection of the graph depicted in Figure 1 reveals higher levels of LMX in the left corner, where both ideal Dynamism and actual Dynamism were at their minimum.
The slope along the line of incongruence was negative and insignificant (-.22) and the curvature was positive and insignificant (.12), which meant that H2d was not supported. However, the rather moderate value of the slope and the visual information revealed in Figure 1 indicated a tendency for LMX to decrease as the incongruence increased, reaching a minimum when ideal Dynamism was low and actual Dynamism was high.

In the case of Tyranny, the response surface results showed a negative significant slope (-.14*) and a null curvature along the line of congruence. These indicated a linear relationship between the variables, in the sense that LMX decreased as both ideal and actual Tyranny increased simultaneously. In Figure 1, the lowest level of LMX along the line of congruence is observed in the right corner, where both ideal and actual Tyranny reached their maximum levels. Thus, H3e was supported. The slope along the line of incongruence was negative and significant (-.18*), revealing that LMX was lower when the direction of the incongruence was such that ideal Tyranny was below actual Tyranny. The same conclusion is revealed by inspecting the graph depicted in Figure 1, where the minimum value of LMX along the line of incongruence was reached in the back corner of the graph, where ideal Tyranny was below actual Tyranny. The curvature along the line of incongruence was positive and insignificant (.01), revealing a linear relationship between variables. Thus, H4e received support.

Regarding Masculinity, the response surface analysis showed a significant negative slope along the line of congruence (-.09*) and an insignificant positive curvature (.02), which, taken literally, indicated that LMX decreased as both ideal and actual Masculinity increased. However, visual inspection of the graph in Figure 1 revealed rather a saddle-shaped response surface, indicating a non-linear relationship between the variables and a tendency for LMX to increase when ideal and actual Masculinity increased or decreased simultaneously. Thus, H3f did not receive support. The slope along the line of incongruence was positive but insignificant (.07) and the curvature along the line of incongruence was negative but insignificant (-.04). Therefore, H4f was not supported. Nevertheless, the graph depicted in Figure 1 revealed a concave surface along the line of incongruence, indicating a tendency for LMX to decrease as the incongruence between ideal and actual Masculinity increased.

Table 3 presents a summary of the results (an X marks a hypothesis that did not receive statistical support but whose tendency was revealed in the graphical representation of the response surface). The results of the mediation analysis deployed in SPSS, generated from 10,000 bootstrapped samples, are presented in Table 4. The only mediation hypothesis that received support was H5a, revealing that the effect of the Sensitivity block variable was transferred to engagement partly through LMX (.30* for the indirect effect, .09 for the direct effect and .39** for the total effect). The Intelligence, Dedication, Tyranny and Masculinity block variables had indirect effects on engagement, but their total effects were insignificant, whereas Dynamism had only a direct effect on engagement.
Discussion
In this study, guided by the bandwidth-fidelity principle, I investigated the relationships between each set of ideal-actual ILTs traits and LMX and, subsequently, their indirect effect on work engagement. I used polynomial regression with response surface analysis for testing the relationship between ideal-actual congruence and LMX, and the block variable approach for testing the mediation hypotheses.

The results revealed that among the four positive ILTs traits, only sensitivity seems to be inherently good, as both the congruence and the incongruence hypotheses were supported. This means that even when the perceived sensitivity of leaders is above the expected level, followers perceive higher LMX than when the perceived sensitivity is below expectations. The results are in line with Rupprecht and her colleagues' (2016) findings on the relationship between ideal-actual sensitivity incongruence and CWB, and with the meta-analytical correlations found by Judge, Piccolo and Ilies (2004), which revealed that consideration for followers (e.g., concern and respect) was more strongly related to leadership outcomes than the organizational capacities of the leaders to structure the work of their followers.

Regarding the other three positive ILTs traits, namely intelligence, dedication and dynamism, the results indicate that meeting followers' expectations, especially when they are extremely high or low, is the best way for leaders to develop a high-quality LMX with their followers. Despite the three ILTs traits being considered intrinsically positive, the current results show that when followers' expectations are low and their perception is that leaders manifest those traits at higher levels, the perceived quality of the relationship is affected. This is in line with the needs-supplies fit concept (Edwards et al., 1998), which explains that, on the one hand, receiving too much of one kind of supply inhibits other resources from being obtained and, on the other, it creates a liability for the dyadic partner to reciprocate. Nevertheless, the results also indicate that unfulfilled expectations are safer when they occur at lower levels of expectations (i.e., when ideal < actual) than when they occur at higher levels (i.e., when ideal > actual).

Concerning the negative ILTs traits, the current results indicate that the tyranny of leaders should be low, irrespective of the level of followers' expectations. Even when followers' expectations are not fulfilled, it is better when the direction is such that expectations are above the actual tyranny of the leader. Regarding masculinity, the present results indicate that, for a positive impact on LMX, followers' expectations must be met, irrespective of the level of expectations. In other words, if followers prefer masculine leaders, manifested masculinity enhances LMX, but so does low perceived masculinity when followers prefer a low level of masculinity. Considering simultaneously the current findings related to Tyranny and Masculinity, which indicate effects on LMX, and the results obtained by Epitropaki and Martin (2004), which revealed no effect of the composite score of negative ILTs traits on well-being, a possible explanation for the different results is that when the effects of Tyranny and Masculinity on LMX are combined, as they were in the mentioned study, they could generate a destructive interference, so that the cumulative effect of the two is smaller than either one taken individually.
Additionally, I found that ideal-actual sensitivity congruence had an indirect effect on work engagement, in line with the results obtained by Rupprecht and her colleagues (2016) and those found by Epitropaki and Martin (2005). Ideal-actual intelligence congruence had no effect on engagement, neither direct nor indirect, suggesting that, as expected, it might have predictive validity for other types of outcomes, such as performance. Dedication had only an indirect effect on engagement through LMX, but the total effect was insignificant, suggesting that other mechanisms inhibit the effect transmitted through LMX. Dynamism had a direct effect on engagement, but not an indirect one, again revealing that its impact on engagement is transferred through a mediating variable other than LMX. Both tyranny and masculinity had indirect effects on engagement via LMX, but their total effects were insignificant, suggesting that other mediating variables masked the effects transmitted through LMX.
To sum up, the mediation results indicate that LMX has a mediating effect only for the relationship between ideal-actual sensitivity congruence and work engagement. The remaining ILTs traits may impact outcomes other than engagement, as speculated above for ideal-actual intelligence congruence and job performance, or their indirect effects via LMX are inhibited by other explanatory mechanisms.
Although not all the hypotheses were supported, current results provide empirical arguments for exploring ILTs traits at the level of narrow traits, instead of broad dimensions.
Future studies should address additional outcomes, but also additional mediating mechanisms linking ideal-actual ILTs congruence to those outcomes. Identifying which ILTs trait may predict each outcome, and whether some ILTs traits are more important than others within specific populations or in specific settings, can help achieve a greater understanding of the impact of fulfilled expectations about leaders in work settings.
There are several limitations to this study. First, the results should be interpreted carefully due to the small sample size. A larger sample might have allowed more relationships to reach significance and would give higher confidence in the findings. Second, this study is cross-sectional in nature and the data were collected from a single source. Although the design calls for self-assessment, longitudinal or experimental studies can be conducted in the future, or other variables rated by other sources can be addressed. Nevertheless, although common method variance (CMV) may be a concern, Conway and Lance (2010) explained that most of the time CMV is a perpetuated misconception and that same-source correlations might be closer to true scores than different-source correlations.
This study adds to the literature on ILTs in three important ways. First, it draws on the bandwidth-fidelity principle and reveals that addressing ILTs at the level of narrow traits provides additional theoretical and practical insights. Second, by using polynomial regression with response surface methodology, it shows nuanced effects of ideal-actual (in)congruence on LMX and engagement. Third, the current study adds to the Occupational Health Psychology (OHP) literature by showing how leaders' behaviors can affect followers' OHP-related outcomes. There are several practical implications of this study as well. By showing that the sensitivity of leaders is beneficial whenever it is high and that the tyranny of leaders should be low for a high-quality LMX to be perceived by followers, I provide valuable information for those in charge of selection and development programs for leaders. Additionally, by revealing non-linear relationships between ideal-actual congruence and LMX for the other ILTs traits, the current study shifts the focus to the idea of matching leaders and followers based on their expectations, in order to provide benefits both for followers and for organizations. Finally, training programs might be conducted in organizations aimed at adjusting followers' mental models of effective leaders so that they are more adapted to organizational settings and less influenced by followers' personal histories.
The results of the current paper pave the way for future studies that address the unique effects of each ideal-actual ILTs trait congruence on other organizational outcomes. Additionally, the CMV limitation calls for future studies with a dyadic design, which, on the one hand, would have the advantage of multiple sources and, on the other, could tap into the dyadic effect of intra-personal and interpersonal ILTs and IFTs congruence on work outcomes.
In conclusion, this study expands the existing knowledge on ILTs and their impact on organizational outcomes by showing that, to predict specific outcomes, narrow ILTs traits should be considered and that, counterintuitively, some positive ILTs traits can be detrimental when they are too high, while some negative ILTs traits are not always harmful. | 2020-11-26T09:06:13.690Z | 2020-11-09T00:00:00.000 | {
"year": 2020,
"sha1": "62bb0d855bb7456aa7daaa5818d5ecfdee06d269",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.hrp-journal.com/index.php/pru/article/download/475/466",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6aa22b89753e6347f567408ab040022088e04c01",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
118875306 | pes2o/s2orc | v3-fos-license | Investigation of Transport Properties for FeSe$_{1-x}$Te$_x$ Thin Films under Magnetic Fields
We investigated the transport properties under magnetic fields of up to 9 T for FeSe$_{1-x}$Te$_x$ thin films on CaF$_2$. Measurements of the temperature dependence of the electrical resistivity revealed that for $x = 0.2 - 0.4$, where $T_{\rm c}$ is the highest, the width of the superconducting transition increased with increasing magnetic field, while the width was almost the same with increasing magnetic field for $x = 0 - 0.1$. In addition, the temperature dependence of the Hall coefficient drastically changed between $x = 0.1$ and $0.2$ at low temperatures. These results indicate that clear differences in the nature of the superconductivity and electronic structure exist between $x=0-0.1$ and $x \ge 0.2$.
CaF$_2$ substrates by pulsed laser deposition (PLD) [10]. $T_{\rm c}$ for these films increases with decreasing x for 0.2 ≤ x ≤ 1 and reaches 23 K at x = 0.2. This value is 1.5 times higher than the highest value obtained for bulk samples. Surprisingly, we observed a sudden suppression of $T_{\rm c}$ between x = 0.1 and 0.2. Therefore, it is of great interest to investigate the differences in the physical properties other than $T_{\rm c}$ between these ranges of x.
In this letter, we show the temperature dependence of the electrical resistivity under magnetic fields. In this study, all of the films were grown by PLD with a KrF laser. FeSe$_{1-x}$Te$_x$ polycrystalline pellets (x = 0 − 0.8) were used as targets [11]. The lattice constants, estimated from XRD measurements, are almost the same as those reported in our previous paper [10]. Thus, in this paper, we use the nominal Te content of the target as the film composition. The film thicknesses are listed in Table I. In Figs. 1(e) and 1(f), resistive broadening is observed with increasing magnetic field. These results suggest that the nature of the superconductivity is different between Group A and Groups B and C.
It is important to discuss the origins of the resistive broadening for FeSe$_{1-x}$Te$_x$ films with x ≥ 0.2. Before discussing this case, we recall the origin for the high-$T_{\rm c}$ cuprates, since resistive broadening is familiar in cuprates [16]. The origin is considered to be the result of superconducting fluctuations due to strong two-dimensionality [17]. To examine whether the same scenario as for the cuprates is applicable to FeSe$_{1-x}$Te$_x$ thin films, we focus on the anisotropy of the upper critical field, $\gamma \equiv B_{\rm c2,0K}^{\parallel ab} / B_{\rm c2,0K}^{\parallel c}$. Figure 2 shows the temperature dependence of the upper critical field $B_{\rm c2}$ along the ab plane and the c-axis for the films with x = 0, 0.3, and 0.7. For FeSe$_{1-x}$Te$_x$, the estimation of $B_{\rm c2}$ at 0 K from low-magnetic-field data by utilizing Werthamer-Helfand-Hohenberg (WHH) theory is very difficult, because this theory does not take multiband materials into account [18]. For FeSe$_{1-x}$Te$_x$, it is widely accepted that multiple bands, which originate from Fe 3d orbitals, cross the Fermi level [19]. Moreover, the value of $B_{\rm c2}$ at low temperatures is strongly suppressed by the Pauli paramagnetic effect [20,21]. However, in order to compare $B_{\rm c2}$ for each x within the orbital limit, we consider that the orbital limit inferred using conventional WHH theory is a first-step barometer in the discussion, and we estimate $B_{\rm c2}$ at 0 K using conventional WHH theory. Figure 3(a) shows the x dependence of $B_{\rm c2}$ at 0 K along the ab plane and the c-axis. As with $T_{\rm c}$ for these films, the value of $B_{\rm c2}$ drastically changes between x = 0.1 and 0.2. The value of $B_{\rm c2}$ for x = 0.2 is more than twice that for x = 0.1.
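For concreteness, the conventional dirty-limit WHH estimate referred to above is $B_{\rm c2}(0) \approx -0.69\,T_{\rm c}\,(dB_{\rm c2}/dT)|_{T_{\rm c}}$; a small Python sketch with illustrative numbers (the slope value is a hypothetical placeholder, not one of the paper's measured values):

def whh_bc2_0K(tc_kelvin, slope_T_per_K):
    """Orbital-limited Bc2(0) from the conventional (single-band,
    dirty-limit) WHH formula: Bc2(0) ~= -0.69 * Tc * (dBc2/dT)|Tc."""
    return -0.69 * tc_kelvin * slope_T_per_K

Tc = 23.0      # K, the highest Tc reported for the x = 0.2 films
slope = -2.0   # T/K, hypothetical slope of Bc2(T) near Tc
print(f"Bc2(0) ~ {whh_bc2_0K(Tc, slope):.0f} T")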
Using these values, we estimate the anisotropy of the upper critical field γ. Figure 3(b) shows the x dependence of γ. The value of γ is 1.5 − 3 and not so different between x = 0.1 and x ≥ 0.2 [19,26,27]. To be precise, we should take all of the bands into account. However, we adopt a two-carrier model, including one electron-type carrier (with electron density $n_{\rm e}$ and mobility $\mu_{\rm e}$) and one hole-type carrier (with hole density $n_{\rm h}$ and mobility $\mu_{\rm h}$), for simplicity.
Using this model, the Hall coefficient $R_{\rm H}$, which is the slope of the Hall resistivity in the low-field limit, is expressed as

$R_{\rm H} = \dfrac{n_{\rm h}\mu_{\rm h}^2 - n_{\rm e}\mu_{\rm e}^2}{e\,(n_{\rm h}\mu_{\rm h} + n_{\rm e}\mu_{\rm e})^2}$.   (1)

Figure 5 shows the temperature dependence of $R_{\rm H}$ for FeSe$_{1-x}$Te$_x$ thin films with x = 0 − 0.5. At room temperature, the sign of $R_{\rm H}$ is positive for all films. Above 100 K, $R_{\rm H}$ for x = 0 and 0.1 decreases as the temperature decreases, and below 100 K it starts to increase rapidly. These results indicate that hole-type transport is dominant at low temperatures. The increase in $R_{\rm H}$ may be related to the nematicity in FeSe [28-30]. In contrast, it has been reported that the sign of $R_{\rm H}$ for FeSe single crystals is negative at low temperatures [31]. We previously reported the temperature dependence of $R_{\rm H}$ for films with x = 0.5 [26], and we proposed that $T_{\rm c}$ strongly depends on the mobility of both electron-type and hole-type carriers. Judging from the behavior of $R_{\rm H}$ for FeSe$_{1-x}$Te$_x$ thin films with x = 0 − 0.5 shown in Fig. 5, a higher $T_{\rm c}$ is obtained when the mobilities of hole-type and electron-type carriers are comparable. This is consistent with our previous proposal [26]. The different behavior of $R_{\rm H}(T)$ for x ≤ 0.1 and x ≥ 0.2 is in good agreement with the dependence of $T_{\rm c}$ on x. As pointed out above, the sudden increase in $R_{\rm H}$ below 100 K in films with x = 0 − 0.1 is likely the result of a change in the electronic structure derived from the nematic transition. Thus, our results suggest that the suppression of $T_{\rm c}$ for x < 0.1 is due to the electronic nematicity. In order to further clarify the origin of the suppression of $T_{\rm c}$, it is important to measure the Hall resistivity under higher magnetic fields, the results of which will be discussed in a separate publication.
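A numerical sketch of the two-carrier expression in Eq. (1); the carrier densities and mobilities below are purely illustrative, not fitted values from the paper:

from scipy.constants import e  # elementary charge in C

def hall_coefficient(n_e, mu_e, n_h, mu_h):
    """Weak-field Hall coefficient of the two-carrier model, Eq. (1).
    Densities in m^-3, mobilities in m^2 V^-1 s^-1; returns m^3/C."""
    return (n_h * mu_h**2 - n_e * mu_e**2) / (e * (n_h * mu_h + n_e * mu_e)**2)

# Hypothetical, order-of-magnitude inputs for illustration only:
print(hall_coefficient(n_e=1e27, mu_e=2e-3, n_h=1e27, mu_h=3e-3))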
In conclusion, we have investigated the temperature dependence of the electrical resistivity... | 2016-08-04T01:34:28.000Z | 2016-06-07T00:00:00.000 | {
"year": 2016,
"sha1": "a403c3fe4574ddf175a419dd2edf3806aec6edba",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1608.01170",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a403c3fe4574ddf175a419dd2edf3806aec6edba",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
7143991 | pes2o/s2orc | v3-fos-license | Exploitation in Affect Detection in Open-Ended Improvisational Text
We report progress on adding affect-detection to a program for virtual dramatic improvisation, monitored by a human director. We have developed an affect-detection module to control an automated virtual actor and to contribute to the automation of directorial functions. The work also involves basic research into how affect is conveyed through metaphor. The project contributes to the application of sentiment and subjectivity analysis to the creation of emotionally believable synthetic agents for interactive narrative environments.
Introduction
Improvised drama and role-play are widely used in education, counselling and conflict resolution. Researchers have explored frameworks for e-drama, in which virtual characters (avatars) interact under the control of human actors. The springboard for our research is an existing system (edrama) created by one of our industrial partners, Hi8us Midlands, used in schools for creative writing and teaching in various subjects. Experience suggests that e-drama helps students lose their usual inhibitions, because of anonymity, etc. In edrama, characters are completely human-controlled, their speeches are textual in speech bubbles, and their visual forms are cartoon figures. The actors (users) are given a loose scenario within which to improvise, but are at liberty to be creative. There is also a human director, who constantly monitors the unfolding drama and can intervene by, for example, sending messages to actors, or by introducing and controlling a minor 'bit-part' character to interact with the main characters. But this places a heavy burden on directors, especially if they are, for example, teachers unpracticed in the directorial role. One research aim is thus to partially automate the directorial functions, which importantly involve affect detection. For instance, a director may intervene when emotions expressed or discussed by characters are not as expected. Hence we have developed an affect-detection module. It has not yet actually been used for direction, but instead to control an automated bit-part actor, EMMA (emotion, metaphor and affect). The module identifies affect in characters' speeches and makes appropriate responses to help stimulate the improvisation. Within affect we include: basic and complex emotions such as anger and embarrassment; meta-emotions such as desiring to overcome anxiety; moods such as hostility; and value judgments (of goodness, etc.). Although merely detecting affect is limited compared to extracting full meaning, this is often enough for stimulating improvisation.
Much research has been done on creating affective virtual characters in interactive systems. Indeed, Picard's work (2000) makes great contributions to building affective virtual characters. Also, emotion theories, particularly that of Ortony et al. (1988) (OCC), have been used widely therein. Egges et al. (2003) have provided virtual characters with conversational emotional responsiveness. However, few systems are aimed at detecting affect as broadly as we do and in open-ended utterances. Although Façade (Mateas, 2002) included processing of open-ended utterances, the broad detection of emotions, rudeness and value judgements is not covered. Zhe & Boucouvalas (2002) demonstrated emotion extraction using a tagger and a chunker to help detect the speaker's own emotions, but their work focuses only on emotional adjectives, considers only first-person emotions and neglects deep issues such as figurative expression. Our work is distinctive in several respects. Our interest is not just in (a) the positive first-person case: the affective states that a virtual character X implies that it has (or had or will have, etc.), but also in (b) affect that X implies it lacks, (c) affect that X implies that other characters have or lack, and (d) questions, commands, injunctions, etc. concerning affect. We also aim for the software to cope partially with the important case of metaphorical conveyance of affect (Fussell & Moss, 1998; Kövecses, 1998).
Our project does not involve using or developing deep, scientific models of how emotional states, etc., function in cognition. Instead, the deep questions investigated are on linguistic matters such as the metaphorical expression of affect. Also, in studying how people understand and talk about affect, what is of prime importance is their common-sense views of how affect works, irrespective of scientific reality. Metaphor is strongly involved in such views.
Our Current Affect Detection
Various characterizations of emotion are used in emotion theories. The OCC model uses emotion labels (anger, etc.) and intensity, while Watson and Tellegen (1985) use positivity and negativity of affect as the major dimensions. Currently, we use an evaluation dimension (negative-positive), affect labels, and intensity. Affect labels plus intensity are used when strong text clues signalling affect are detected, while the evaluation dimension plus intensity is used for weak text clues. Moreover, our analysis reported here is based on the transcripts of previous e-drama sessions. Since even a person's interpretations of affect can be very unreliable, our approach combines various weak relevant affect indicators into a stronger and more reliable source of information for affect detection. Now we summarize our affect detection based on multiple streams of information.
Pre-processing Modules
The language in the speeches created in e-drama sessions severely challenges existing language-analysis tools if accurate semantic information is sought, even for the purposes of restricted affect detection. The language includes misspellings, ungrammaticality, abbreviations (often as in text messaging), slang, use of upper case and special punctuation (such as repeated exclamation marks) for affective emphasis, repetition of letters or words also for affective emphasis, and open-ended interjective and onomatopoeic elements such as "hm" and "grrrr". In the examples we have studied, which so far involve teenage children improvising around topics such as school bullying, the genre is similar to Internet chat.
To deal with the misspellings, abbreviations, letter repetitions, interjections and onomatopoeia, several types of pre-processing occur before actual detection of affect.
A lookup table has been used to deal with abbreviations, e.g. 'im (I am)', 'c u (see you)' and 'l8r (later)'. It includes abbreviations used in Internet chat rooms and others found in an analysis of previous e-drama sessions. We handle ambiguity (e.g., "2" (to, too, two) in "I'm 2 hungry 2 walk") by considering the POS tags of immediately surrounding words. Such simple processing inevitably leads to errors, but in evaluations using examples in a corpus of 21,695 words derived from previous transcripts we have obtained 85.7% accuracy, which is currently adequate. We are also considering dealing with abbreviations, etc. in a more general way, by including them as special lexical items in the lexicon of the robust parser we are using (see below).
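A minimal Python sketch of such lookup-table expansion (the table entries are a tiny excerpt, and the handling of "2" is a crude stand-in for the POS-tag disambiguation described above):

import re

ABBREVIATIONS = {"im": "I am", "c u": "see you", "l8r": "later", "m8": "mate"}

def expand_abbreviations(utterance):
    text = utterance
    for abbr, full in ABBREVIATIONS.items():
        text = re.sub(r"\b" + re.escape(abbr) + r"\b", full, text, flags=re.I)
    # Stand-in for POS-based disambiguation of "2": "too" before an
    # adjective-like word, otherwise "to".
    text = re.sub(r"\b2\b(?=\s+(hungry|tired|scared)\b)", "too", text)
    text = re.sub(r"\b2\b", "to", text)
    return text

print(expand_abbreviations("im 2 hungry 2 walk"))  # -> "I am too hungry to walk"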
The iconic use of word length (corresponding roughly to imagined sound length), as found both in ordinary words with repeated letters (e.g. 'seeeee') and in onomatopoeia and interjections (e.g. 'wheee', 'grr', 'grrrrrr', 'agh', 'aaaggghhh'), normally implies strong affective states. We have a small dictionary containing base forms of some special words (e.g. 'grr') and of some ordinary words that often have letters repeated in e-drama. The Metaphone spelling-correction algorithm (http://aspell.net/metaphone/), which is based on pronunciation, then works with the dictionary to locate the base forms of words with letter repetitions.
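A simplified sketch of the letter-repetition handling (the real module combines the pronunciation-based Metaphone algorithm with the small dictionary; here runs of repeated letters are simply shortened until a listed base form is found):

import re

# Tiny illustrative dictionary of base forms.
BASE_FORMS = {"grr", "agh", "whee", "see", "cool"}

def squash_repeats(word):
    # Try shortening every run of repeated letters to length 2, then 1.
    for max_run in (2, 1):
        candidate = re.sub(r"(.)\1+", lambda m: m.group(1) * max_run, word.lower())
        if candidate in BASE_FORMS:
            return candidate
    return word

for w in ("seeeee", "grrrrrr", "aaaggghhh", "wheee"):
    print(w, "->", squash_repeats(w))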
Finally, the Levenshtein distance algorithm (http://www.merriampark.com/ld.htm) with a contemporary English dictionary deals with spelling mistakes in users' input.
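The Levenshtein distance itself is a short dynamic program; a self-contained sketch of its use for spelling correction against a toy dictionary:

def levenshtein(a, b):
    # Classic edit distance (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

DICTIONARY = ["angry", "hungry", "anger", "hangar"]
print(min(DICTIONARY, key=lambda w: levenshtein("angrey", w)))  # -> "angry"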
Processing of Imperative Moods
One useful pointer to affect is the use of imperative mood, especially when used without softeners such as 'please' or 'would you'. Strong emotions and/or rude attitudes are often expressed in this case. There are special, common imperative phrases we deal with explicitly, such as "shut up" and "mind your own business". They usually indicate strong negative emotions. But the phenomenon is more general.
Detecting imperatives accurately in general is by itself an example of the non-trivial problems we face. We have used the syntactic output from the Rasp parser (Briscoe & Carroll, 2002) and semantic information in the form of the semantic profiles for the 1,000 most frequently used English words (Heise, 1965) to deal with certain types of imperatives.
Rasp recognises some types of imperatives directly. Unfortunately, the grammar of the 2002 version of the Rasp parser that we have used does not deal properly with certain imperatives (John Carroll, p.c.), which means that examples like "you shut up", "Dave bring me the menu", "Matt don't be so blunt" and "please leave me alone" are not recognized as imperatives, but as normal declarative sentences. Therefore, further analysis is needed to detect imperatives, by additional processing applied to the possibly-incorrect syntactic trees produced by Rasp.
If Rasp outputs a subject, 'you', followed by certain verbs (e.g. 'shut', 'calm', etc.) or certain verb phrases (e.g. 'get lost', 'go away', etc.), the sentence type will be changed to imperative. (Note: in "you get out" the "you" could be a vocative rather than the subject of "get", especially as punctuation such as commas is often omitted in our genre; however, these cases are not worth distinguishing and we assume that the "you" is a subject.) If the softener 'please' is followed by the base form of a verb, then the input is taken to be imperative. If a singular proper noun is followed by the base form of a verb, then the sentence is taken to be an imperative as well (e.g. "Dave get lost"). However, when a subject is followed by a verb for which there is no difference at all between the base form and the past-tense form, ambiguity arises between imperative and declarative (e.g. "Lisa hit me").
There is an important special case of this ambiguity. If the object of the verb is 'me', then in order to resolve the ambiguity we have adopted the evaluation value of the verb from Heise's (1965) compilation of semantic differential profiles. In these profiles, Heise listed values of evaluation, activation, potency, distance from neutrality, etc. for the 1,000 most frequently used English words. In the evaluation dimension, positive values imply goodness. Because people normally use 'a negative verb + me' to complain to others about unfair treatment, if the evaluation value of such a verb is negative, then the sentence is probably not imperative but declarative (e.g. "Mayid hurt me"). Otherwise, other factors implying an imperative are checked in the sentence, such as exclamation marks and capitalization. If these factors occur, then the input is probably an imperative. Otherwise, the conversation logs are checked to see if there is any question sentence directed toward this speaker recently. If there is, then the input is conjectured to be declarative.
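The decision procedure for this ambiguous 'subject + verb + me' case can be summarized in a few lines of Python (the evaluation values are illustrative stand-ins for Heise's profiles, and the final fallback is our reading of the heuristic rather than something spelled out above):

# Illustrative evaluation values in the spirit of Heise (1965).
EVALUATION = {"hit": -0.7, "hurt": -0.9, "help": 0.8, "hug": 0.9}

def classify_verb_me(verb, exclaimed, capitalised, recent_question_to_speaker):
    if EVALUATION.get(verb, 0.0) < 0:
        return "declarative"   # e.g. "Mayid hurt me" is read as a complaint
    if exclaimed or capitalised:
        return "imperative"    # affective emphasis suggests a command
    if recent_question_to_speaker:
        return "declarative"   # likely an answer to the recent question
    return "imperative"        # assumed default; not stated explicitly above

print(classify_verb_me("hurt", False, False, False))  # declarative
print(classify_verb_me("hug", True, False, False))    # imperative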
There is another type of sentence: 'don't you + base form of verb' that we have started to address. Though such a sentence is often interrogative, it is also often a negative version of an imperative with a 'you' subject (e.g. "Don't you dare call me a dog," "Don't you call me a dog"). Normally Rasp regards it as a question sentence. Thus, further analysis has also been implemented for such a sentence structure to change its sentence type to imperative. Although currently this has limited effect, as we only infer a (negative) affective quality when the verb is "dare", we plan to add semantic processing in an attempt to glean affect more generally from "Don't you …" imperatives.
Affect Detection by Pattern Matching
In an initial stage of our work, affect detection was based purely on textual pattern-matching rules that looked for simple grammatical patterns or templates partially involving lists of specific alternative words. This continues to be a core aspect of our system but we have now added robust parsing and some semantic analysis. Jess, a rule-based Java framework, is used to implement the pattern/template-matching rules in EMMA.
In the textual pattern-matching, particular keywords, phrases and fragmented sentences are found, but also certain partial sentence structures are extracted. This procedure possesses the robustness and flexibility to accept many ungrammatical fragmented sentences and to deal with the varied positions of sought-after phraseology in speeches. However, it lacks other types of generality and can be fooled when the phrases are suitably embedded as subcomponents of other grammatical structures. For example, if the input is "I doubt she's really angry", rules looking for anger in a simple way will fail to provide the expected results.
The transcripts analysed to inspire our initial knowledge base and pattern-matching rules were derived independently from previous edrama improvisations based on a school bullying scenario. We have also worked on another, distinctly different scenario, Crohn's disease, based on a TV programme by another of our industrial partners (Maverick TV). The rule sets created for one scenario have a useful degree of applicability to other scenarios, though there will be a few changes in the related knowledge database according to the nature of specific scenarios.
The rules, as we mentioned at the beginning of this section, conjecture the character's emotions, evaluation dimension (negative or positive), politeness (rude or polite) and what response EMMA should make.
Multiple exclamation marks and capitalisation are frequently employed to express emphasis in e-drama sessions. If exclamation marks or capitalisation are detected in a character's utterance, then the emotion intensity is deemed to be comparatively high (and emotion is suggested even in the absence of other indicators).
A reasonably good indicator that an inner state is being described is the use of 'I' (see also Craggs & Wood (2004)), especially in combination with the present or future tense. In the school-bullying scenario, when 'I' is followed by a future-tense verb, the affective state 'threatening' is normally being expressed; and the utterance is usually the shortened version of an implied conditional, e.g., "I'll scream [if you stay here]." Note that when 'I' is followed by a present-tense verb, a variety of other emotional states tend to be expressed, e.g. "I want my mum" (fear), "I hate you" (dislike) and "I like you" (liking). Further analysis of first-person, present-tense cases is provided in the following section.
Going Beyond Pattern Matching
In order to go beyond the limitations of simple pattern matching, sentence type information obtained from the Rasp parser has also been incorporated into the pattern-matching rules. The general sentence structure information not only helps EMMA to detect affective states in the user's input (see the above discussion of imperatives) and to decide whether the detected affective states should be counted, but also helps EMMA to make appropriate responses. Rasp informs the pattern-matching rules with sentence type information. If the current input is a conditional or question sentence with affective keywords or structures in it, then the affective states won't be counted. For example, if the input is "I like the place when it is quiet", Rasp works out its sentence type (a conditional sentence), and the rule for structures containing 'like' with a normal declarative sentence label won't be activated. Instead, the rule for the keyword 'when' with a conditional sentence type label will be fired. Thus an appropriate response will be obtained.
Additionally, as we discussed in section 2.2, we use Rasp to indicate imperative sentences, such as when Mayid (the bully) said "Lisa, don't tell Miss about it". The pseudo-code example rule for such input is as follows:

(defrule example_rule
  ?fact <- (any string containing negation
            and the sentence type is 'imperative')
  =>
  (obtain affect and response from knowledge database))

Thus a declarative input such as "I won't tell Miss about it" won't be able to activate the example rule, due to the different sentence type information. In particular, we have assigned a special sentence type label ('imp+please') for imperatives with the softener 'please'. Simply using this special sentence type label in the pattern-matching rules helps us effortlessly obtain the user's linguistic style ('polite'), and probably a polite response from EMMA as well, according to the different roles in specific scenarios.
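For readers unfamiliar with Jess, the same sentence-type gating can be sketched in Python (patterns, labels and outputs here are illustrative, not the actual rule base):

RULES = [
    {"pattern": "don't tell", "stype": "imperative",
     "affect": "threatening", "politeness": "rude"},
    {"pattern": "leave me alone", "stype": "imp+please",
     "affect": "annoyed", "politeness": "polite"},
]

def match_rules(utterance, stype):
    # A rule fires only if both its keyword pattern and its
    # sentence-type label match the input.
    for rule in RULES:
        if rule["pattern"] in utterance.lower() and rule["stype"] == stype:
            return rule["affect"], rule["politeness"]
    return None

print(match_rules("Lisa, don't tell Miss about it", "imperative"))
print(match_rules("I won't tell Miss about it", "declarative"))  # no rule fires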
Aside from using the Rasp parser, we have also worked on implementing simple types of semantic extraction of affect using affect dictionaries and electronic thesauri, such as WordNet. The way we are currently using WordNet is briefly as follows.
Using WordNet for a First Person Case
As we mentioned earlier, use of the first person with a present-tense verb tends to express an affective state in the speaker, especially in discourse in which affect is salient, as is the case in scenarios such as School Bullying and Crohn's Disease. We have used the Rasp parser to detect such a sentence. First, the user's input is sent to the pattern-matching rules in order to obtain the speaker's current affective state and EMMA's response to the user. If no rule is fired (i.e., we don't obtain any information about the speaker's affective state and EMMA's response from the pattern-matching rules), further processing is applied. We use WordNet to track down rough synonyms of the verb (possibly from different WordNet "synsets") in the verb phrase of the input sentence, in order to allow a higher degree of generality than would be achieved just with our pattern-matching rules. In order to find the closest synonyms to the verb across different synsets, the semantic profiles of the 1,000 most frequently used English words (Heise, 1965) have been employed, specifically to find the evaluation values of every synonym of the original verb. We transform positive and negative evaluation values in Heise's dictionary into binary 'positive' and 'negative' only. Thus if any synonym has the same evaluation value ('positive' or 'negative') as that of the original verb, it is selected as a member of the set of closest synonyms. Then we use one closest synonym to replace the original verb in the user's input. This newly built sentence is sent to the pattern-matching rules in order to obtain the user's affective state and EMMA's response. Such processing (using a closest synonym to replace the original verb and sending the newly built sentence to the pattern-matching rules) continues until we obtain the speaker's affective state and an appropriate response.
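A sketch of this synonym-replacement loop using NLTK's WordNet interface (requires the WordNet corpus, nltk.download('wordnet'); the binary evaluation dictionary is an illustrative stand-in for the binarized Heise profiles):

from nltk.corpus import wordnet as wn

EVALUATION = {"hate": "negative", "detest": "negative", "love": "positive"}

def closest_synonyms(verb):
    # Collect rough synonyms across synsets, keeping only those whose
    # evaluation polarity matches that of the original verb.
    target = EVALUATION.get(verb)
    synonyms = set()
    for synset in wn.synsets(verb, pos=wn.VERB):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name != verb and EVALUATION.get(name) == target:
                synonyms.add(name)
    return synonyms

# Each candidate in turn replaces the verb in the input, and the rebuilt
# sentence is re-sent to the pattern-matching rules.
print(closest_synonyms("detest"))  # e.g. {'hate'}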
Responding Regimes
EMMA normally responds to, on average, every Nth speech by another character in the e-drama session, where N is a changeable parameter (currently set to 3). However, it also responds when EMMA's character's name is mentioned, and makes no response if it cannot detect anything useful in the utterance it is responding to. The one-in-N average is achieved by sampling a random variable every time another character says something. As a future development, we plan to have N dynamically adjustable according to how confident EMMA is about what it has discerned in the utterance at hand.
EMMA sometimes makes a random response from several stored response candidates that are suitable for the affective quality it has discerned in the utterance it is responding to. In addition, EMMA sometimes reflects back, in modified form, part of the user's input string as a subcomponent of its response. Notice here that, because the pre-processing module reported in section 2.1 expands abbreviations and corrects misspellings, it helps to obscure the fact that part of EMMA's response is only a reflection. E.g.:

<Character> I'm ur (your) best m8 (mate).
<EMMA> Why are you my best mate?

On the other hand, we also, on a random basis, purposely put phenomena such as abbreviations and Internet-chat features into EMMA's responses in order to make EMMA express itself more like a typical school-student user. E.g.:

<Character> You get lost!
<EMMA> Hey, coool it folks. (notice the lengthening of 'cool')
<Character> He didn't do it.
<EMMA> No1 believes you, m8. (where 'no1' stands for 'no one')

It should be borne in mind that EMMA's responses are not aimed at engaging with the detailed meaning of the utterance, but simply at stimulating the improvisation in a way that is somewhat sensitive to the affect being expressed. Furthermore, in order to make the EMMA character's responses push the improvisation forward, the character will not only ask scenario-related questions of the main characters, but also introduce new topics closely related to the scenario. In a recent user-testing debrief session, secondary school students mentioned that the human bit-part character did not stay in character and said pointless things, while in another session one student, who played a main character, believed that the EMMA character was the only one that stuck to scenario-related topics. The directors reported that, even when a main character was silent and the director did not intervene very much, the EMMA character kept the improvisation on the right track by raising new topics other characters were concerned about.
Affect via Metaphor
In the introduction we commented on two functions of metaphor: metaphor is often used to convey affect, and it also partly underlies folk theories of how affect and emotion work. As an example of the latter, folk theories of anger often talk about, and appear to conceive of, anger as if it were a heated fluid possibly exerting a strong pressure on its containing body. This motivates a wide range of metaphorical expressions, both conventional ones such as "he was boiling with anger and about to blow his top" and more creative variants such as "the temperature in the office was getting higher and this had nothing to do with where the thermostat was set" (modified slightly from a Google™ search). Passion, or the lack of it, is also often described in terms of heat, and the latter example could in certain contexts be used in this way. So far, examples of actors reflecting or commenting on the nature of their own or others' emotions, which would require an appropriate vocabulary, have been infrequent in the e-drama transcripts, although we might expect to find more examples as more students participate in the Crohn's Disease scenario.
However, such metaphorically motivated folk models often directly motivate the terminology used to convey affect, as in utterances such as "you leave me cold", which conveys lack of interest or disdain. This use of metaphor to motivate folk models of emotions and, as a consequence, certain forms of direct expression of emotion has been extensively studied, albeit usually from a theoretical linguistic perspective (Fussell & Moss, 1998; Kövecses, 1998).
Less recognised (although see Barnden et al., 2004; Wallington et al., 2006) is the fact that metaphor is also frequently used to convey emotion more indirectly. Here the metaphor does not describe some aspect of an emotional state, but something else. Crucially, however, it also conveys a negative or positive value judgement which is carried over to what is being described, and this attitude hints at the emotion. For example, to say of someone's room that "it is a cesspit" allows the negative evaluation of 'cesspit' to be transferred to 'the room', and we might assume an emotion of disgust. In our transcripts we find examples such as "smelly attitude" and "you buy your clothes at the rag market" (which we take to be not literally true). Animal insults such as "you pig" frequently take this form, although many are now highly conventionalised. Our analysis of e-drama transcripts shows that this type of metaphor, which conveys affect indirectly, is much more common than the direct use.
It should be apparent that even though conventional metaphorical phraseology may well be listed in specialised lexicons, approaches to metaphor and affect that rely on a form of lexical look-up to determine the meaning of utterances are likely to miss two things: the creative variants and extensions of standard metaphors, and the quite general carrying over of affectual evaluations from the literal meaning of an utterance to the intended metaphorical meaning.
At the time of writing (early June 2006) little in the way of metaphor handling has been incorporated into the EMMA affect-detection module. However, certain aspects of metaphor handling will be incorporated shortly, since they involve extensions of existing capabilities. Our intended approach is partly to look for stock metaphorical phraseology and straightforward variants of it, which is the most common form of metaphor in most forms of discourse, including e-drama. However, we also plan to employ a simple version of the more open-ended, reasoning-based techniques described in the ATT-Meta project on metaphor processing (Barnden et al., 2004;Wallington et al., 2006).
As a first step, it should be noted that insults and swear words are often metaphorical. We are currently investigating specialised insult dictionaries and the machine-readable version of the OALD (Oxford Advanced Learner's Dictionary), which flags slang.
Calling someone an animal of any sort usually conveys affect, but it can be either insulting or affectionate. We have noted that calling someone the young of an animal is often affectionate, and the same is true of diminutive forms (e.g., 'piglet') and nursery forms (e.g., 'moo cow'), even when the adult form of the animal is usually used as an insult. Thus calling someone 'a cat' or 'catty' is different from describing them as kittenish; likewise, "you young pup" is different from "you dog". We are constructing a dictionary of specific animals used in slang and as insults; more generally, for animals not listed, we can use WordNet and electronic dictionaries to determine whether it is the young or the mature form of the animal that is being used, as sketched below.
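One plausible WordNet-based heuristic for the young-animal check, assuming NLTK; the reliance on 'young' appearing in hypernym names or glosses is our assumption about WordNet's encoding, so a real system would back this up with an electronic dictionary.

from nltk.corpus import wordnet as wn

def is_young_animal_sense(word):
    for synset in wn.synsets(word, pos=wn.NOUN):
        # Check hypernym chains for a 'young ...' category node.
        for path in synset.hypernym_paths():
            if any('young' in h.name() for h in path):
                return True
        # Fall back to the gloss, e.g. kitten: 'young domestic cat'.
        if synset.definition().startswith('young'):
            return True
    return False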
We have already noted that in metaphor the affect associated with a source term will carry across to the target by default. EMMA already consults Heise's compilation of semantic differential profiles for the evaluation value of the verb. We will extend the determination of the evaluation value to all parts of speech.
Having the means to determine the emotion conveyed by a metaphor is most useful when metaphor can be reliably spotted. There are a number of means of doing this for some metaphors. For example, idioms are often metaphorical (Moon 1988). Thus we can use an existing idiom dictionary, adding to it as necessary. This will work with fixed idioms, but, as is often noted, idioms frequently show some degree of variation, either by using synonyms of the standard lexis, e.g., 'constructing castles in the air' instead of 'building castles in the air', or by adding modifiers, e.g., 'shut your big fat mouth'. This variability poses a challenge if one is looking only for fixed expressions from an idiom dictionary. However, if the idiom dictionary is treated as providing base forms, with, for example, the nouns being treated as the head nouns of a noun phrase, then the Rasp parser can be used to determine the noun phrase and the modifiers of the head noun, and likewise with verbs, verb phrases, etc.; a toy illustration follows this paragraph. Indeed, this approach can be extended beyond highly fixed expressions to other cases of metaphor, since, as Deignan (2005) has noted, metaphors tend to display a much greater degree of fixedness than non-metaphors, whilst not being as fixed as what are conventionally called idioms.
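A toy illustration of matching idiom base forms while allowing intervening modifiers. The tiny IDIOMS table and its affect labels are invented for the example; a real implementation would match lemmas and parse constituents from Rasp rather than raw tokens.

IDIOMS = {
    ('shut', 'mouth'): 'hostile',              # 'shut your (big fat) mouth'
    ('build', 'castle', 'air'): 'dismissive',  # 'build(ing) castles in the air'
}

def match_idiom(tokens):
    for key, affect in IDIOMS.items():
        it = iter(tokens)
        # The idiom's content words must appear in order;
        # any modifiers may intervene between them.
        if all(any(w == k for w in it) for k in key):
            return affect
    return None

print(match_idiom("shut your big fat mouth".split()))  # -> 'hostile'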
There are other ways of detecting metaphors which we could utilise. Explicit signals of metaphoricity (as in Goatly, 1997; Wallington et al., 2003) mark the use of a metaphor in some cases; such signals include phrases like 'so to speak', 'sort of', 'almost', and 'picture as'. Furthermore, semantic restriction violations (Wilks, 1978; Fass, 1997; Mason, 2004), as in "my car drinks petrol", often indicate metaphor, although not all metaphors violate semantic restrictions. To determine whether semantic restrictions are being violated, domain information from ontologies/thesauri such as WordNet could be used, and/or statistical techniques of the kind used by Mason (2004).
User Testing
We conducted a two-day pilot user test with 39 secondary school students in May 2005, in order to try out and refine a testing methodology. The aim of the testing was primarily to measure the extent to which having EMMA, as opposed to a person, play a character affects users' level of enjoyment, sense of engagement, etc. We concealed the fact that EMMA was involved in some sessions in order to have a fair test of the difference it makes. We obtained surprisingly good results. Having the minor bit-part character called "Dave" played by EMMA as opposed to a person made no statistically significant difference to measures of user engagement and enjoyment, or indeed to user perceptions of the worth of the contributions made by the character "Dave". Users did comment in debriefing sessions on some of Dave's utterances, so the lack of effect was not simply because users did not notice Dave at all. Also, the frequencies of human "Dave" and EMMA "Dave" being responded to during the improvisation (sentences of Dave's causing a response divided by all sentences said by "Dave") are both roughly around 30%, again suggesting that users notice Dave. Additionally, the frequencies of the other side-characters being responded to are roughly the same as for the "Dave" character ("Matthew": around 30%; "Elise": around 35%).
Furthermore, it surprised us that no user appeared to realize that Dave was sometimes computer-controlled. We stress, however, that it is not an aim of our work to ensure that human actors do not realize this. More extensive user testing at several Birmingham secondary schools is being conducted at the time of writing this paper, now that we have tried out and somewhat modified the methodology.
The experimental methodology used in the testing is, in outline, as follows. Subjects are 14-16 year old students at local Birmingham schools. Forty students are chosen by each school for the testing. Four two-hour sessions take place at the school, each session involving a different set of ten students. In a session, the main phases are as follows: an introduction to the software; a First Improvisation Phase, where five students are involved in a School Bullying (SB) improvisation and the remaining five in a Crohn's Disease (CD) improvisation; a Second Improvisation Phase in which this assignment is reversed; the filling out of a questionnaire by the students; and finally a group discussion acting as a debrief phase. For each improvisation, characters are pre-assigned to specific students. Each Improvisation Phase involves some preliminaries followed by ten minutes of improvisation proper.
In half of the SB improvisations and half of the CD improvisations, the minor character Dave is played by one of the students, and by EMMA in the remainder. When EMMA plays Dave, the student who would otherwise have played him is instructed to sit at another student's terminal and thereby be an audience member. Students are told that we are interested in the experiences of audience members as well as of actors. Almost without exception, students have appeared not to suspect that the presence of an audience member results from Dave not being played by another student. At the end of one exceptional session some students asked whether one of the directors from Hi8us was playing Dave.
Of the two improvisations a given student is involved in, exactly one involves EMMA playing Dave. This will be the first session or the second. This EMMA-involvement order and the order in which the student encounters SB and CD are independently counterbalanced across students.
The questionnaire is largely composed of questions that are explicitly about students' feelings about the experience (notably enjoyment, nervousness, and opinions about the worth of the dramatic contributions of the various characters), with essentially the same set of questions being asked separately about the SB and the CD improvisations. The other data collected are: for each debrief phase, written minutes and an audio and video record; notes taken by two observers present during each Improvisation Phase; and automatically stored transcripts of the sessions themselves, allowing analysis of linguistic forms used and types of interactivity. To date only the non-narrative questionnaire answers have been subjected to statistical analysis, with the sole independent variable being the involvement or otherwise of EMMA in improvisations.
Conclusion and Ongoing Work
We have implemented a limited degree of affect-detection in an automated bit-part character in an e-drama application, and fielded the actor successfully in pilot user-testing. Although there is a considerable distance to go in terms of the practical affect-detection that we plan to implement, the detection already implemented is able to produce reasonably appropriate contributions by the automated character. We also intend to use the affect-detection in a module for automatically generating director messages to human actors.
In general, our work bears on the question of how affect/sentiment detection from language can contribute to the development of believable, responsive AI characters, and thus to a user's feeling of involvement in game playing. Moreover, the development of affect detection and sentiment and subjectivity analysis provides a good test-bed for the accompanying deeper research into how affect is conveyed linguistically.
Probing neutrino and Higgs sectors in $SU(2)_1 \times SU(2)_2 \times U(1)_Y$ model with lepton-flavor non-universality
The neutrino and Higgs sectors of the $\mbox{SU(2)}_1 \times \mbox{SU(2)}_2 \times \mbox{U(1)}_Y$ model with lepton-flavor non-universality are discussed. We show that active neutrinos can get Majorana masses from radiative corrections after adding only new singly charged Higgs bosons. The mechanism for the generation of neutrino masses is the same as in the Zee models. This also hints at a solution of the dark matter problem along lines discussed recently in many radiative neutrino mass models with dark matter. Apart from the active neutrinos, the appearance of singly charged Higgs bosons and dark matter does not significantly affect the physical spectrum of the particles in the original model. We demonstrate this point by investigating the Higgs sector both before and after the singly charged scalars are added to it. Many interesting properties of the physical Higgs bosons, which were not shown previously, are explored. In particular, the mass matrices of the charged and CP-odd Higgs fields are proportional to the coefficient of the triple Higgs coupling $\mu$. The mass eigenstates and eigenvalues in the CP-even Higgs sector are also presented. All couplings of the SM-like Higgs boson to normal fermions and gauge bosons differ from the SM predictions by a factor $c_h$, which must satisfy the recent global fit of experimental data, namely $0.995<|c_h|<1$. We have analyzed a more general diagonalization of the gauge boson mass matrices, and we show that the ratio of the tangents of the $W-W'$ and $Z-Z'$ mixing angles is exactly the cosine of the Weinberg angle, implying that the number of parameters is reduced by one. Signals of new physics from decays of new heavy fermions and Higgs bosons at the LHC and constraints on their masses are also discussed.
I. INTRODUCTION
One of the most important purposes of the LHC is to search for manifestations of new physics (NP). It seems that some clues have appeared with massive neutrinos and recent observations of lepton-flavor non-universality (LNU). Recall that lepton family replication is assumed in the Standard Model (SM); therefore, lepton flavor is universal in the latter. For the last two decades, neutrino and Higgs physics have been hot topics in particle physics. With increasing luminosity and beam energy, the LHC has become a powerful tool for searching for NP. With larger masses, the third generation seems to be more interesting, in the sense of its sensitivity to NP. Nowadays, there are two kinds of anomalies in semileptonic B meson decays which are captivating for the LNU. The first one is the class of ratios of branching fractions
\[
R_{D^{(*)}} = \frac{\mathrm{BR}(B \to D^{(*)}\, \tau \bar{\nu}_\tau)}{\mathrm{BR}(B \to D^{(*)}\, \ell \bar{\nu}_\ell)}, \qquad \ell = e, \mu,
\]
which show 3.5$\sigma$ deviations from the corresponding SM predictions [1], $R_{D^*} = 0.252 \pm 0.004$ and $R_D = 0.305 \pm 0.012$.
The above results provide hints of a violation of lepton flavor universality (LFU).
From the physical point of view, the mass of a particle plays a quite important role in its characteristic properties. To justify this, let us mention some well-known examples. The first is that the proton and neutron have a tiny mass difference (940 vs 938) MeV, but the proton is long-lived while the neutron is unstable with its mean lifetime of just under 15 min (881.5 ± 1.5 s). The second example is the situation with the electron and the muon.
Both particles are leptons with just a mass difference (0.511 vs 105.6 MeV); the electron is stable while the muon is unstable, with a mean lifetime of 2.2 µs. (The SM value for $R_K$ was first obtained in Ref. [3].) So one may expect that the third generation of quarks and leptons, where particles are heavier, has to be different from the first two. Within this context, the above data showing the LNU look quite understandable. In other words, it is quite natural to expect that the third fermion generation is more strongly coupled to some new physics than the first two. Recently $R_D$ and $R_{D^*}$ were subjects of intensive studies, mostly in scalar leptoquark models [4,5].
One of the beyond-the-SM models satisfying the recent experimental data on LNU is the model based on the $\mathrm{SU}(2)_1 \times \mathrm{SU}(2)_2 \times \mathrm{U}(1)_Y$ (G221) gauge group [6] (more kinds of G221 models can be found in Ref. [7]). In Ref. [6] the authors mainly concentrated on the explanation of LNU in the lepton sector. But at present, any theoretical model in particle physics has to deal with neutrino masses, the baryon asymmetry of the universe (BAU), and dark matter (DM).
The aim of this work is to study further details of the gauge, Higgs, and neutrino sectors of the model presented in Ref. [6]. We will show that the problems of active neutrino masses and DM in this model can be solved without any changes to the allowed parameter regions satisfying all constraints from flavor physics, tau decays, electroweak precision data, and the recent anomalies in B decays indicated in Ref. [6]. In particular, the active neutrinos get Majorana masses from radiative corrections, for which new lepton-number-violating interactions have to be introduced. The simplest way is the Zee method [8], where a pair of singly charged scalars transforming as singlets under both SU(2) gauge groups is introduced. As in the Zee models, where a second $\mathrm{SU}(2)_L$ Higgs doublet is necessary for creating a nonzero triple coupling of two Higgs doublets and a singly charged Higgs singlet, the $\mathrm{SU}(2)_1$ Higgs doublet $\phi'$ of this G221 model plays the role of the second $\mathrm{SU}(2)_L$ Higgs doublet. Hence, no new breaking scales need to appear, implying that there are no new mass terms contributing to the fermion and gauge boson sectors. This explains why all results of Ref. [6] are unchanged; therefore, we can use them to study the coupling properties of the Higgs and gauge bosons with fermions. In addition, it suggests that the ways of generating active neutrino masses in many recent radiative neutrino mass models can be applied to the G221 model. Many of these models have DM candidates that are neutral fermion singlets odd under a new $Z_2$ symmetry. To avoid complicated Higgs sectors, where only new charged Higgs bosons are included, we will not pay much attention to models solving the DM problem in this work. We will discuss in detail the mechanism of generating neutrino masses from the Zee mechanism, and the Higgs potential with two additional singly charged Higgs singlets. In the gauge boson sector, we will apply a general method to diagonalize the neutral and charged gauge boson mass matrices, from which we obtain the consequence that the tangents of the mixing angles in the two sectors are proportional.
This reduces the number of model parameters by one. In the Higgs sector, the physical Higgs spectrum is presented. Then the SM-like Higgs boson and its couplings to other SM-like particles are identified and compared with the SM predictions. A comparison between the properties of the Higgs spectrum in the G221 model and those in the minimal supersymmetric standard model (MSSM) and two-Higgs-doublet models (THDMs) is also given. Based on these properties and the constraints on parameters given in [6], we discuss the bounds on new Higgs boson masses as well as promising decay channels of the Higgs bosons and fermions that can be searched for at modern colliders such as the LHC.
This paper is organized as follows. In section II, after a brief review of the model, we present a more careful consideration of charged lepton masses and the Zee method for the generation of neutrino masses. In subsection II B we suggest two possibilities for the appearance of DM candidates in the G221 model. The first is based on a radiative neutrino mass model introduced previously; this way does not change the parameter constraints of Ref. [6]. The second way is different, because a new scalar $\mathrm{SU}(2)_1$ multiplet is involved; the new singly charged Higgs bosons are also discussed there. In section V, we briefly review the allowed regions of parameters given in [6], which resulted from a specific numerical illustration in the limit of two vector-like fermion generations and simple textures of Yukawa couplings. Following the searches for new heavy particles at the LHC, we use these allowed regions to investigate lower bounds on masses and promising decay channels of the new fermions and Higgs bosons predicted by this model. Conclusions are given in the last section VI.
II. BRIEF REVIEW OF THE MODEL
The model is based on the gauge group $\mathrm{SU}(2)_1 \times \mathrm{SU}(2)_2 \times \mathrm{U}(1)_Y$, with gauge couplings $g_1$, $g_2$, $g'$, gauge fields $W^1_i$, $W^2_i$, $B$, and the corresponding generators [6], where $i = 1, 2, 3$ is the SU(2) index. All the chiral fermions transform as in [6], where the numbers in brackets refer to $\mathrm{SU}(3)_C$, $\mathrm{SU}(2)_1$, $\mathrm{SU}(2)_2$, and the hypercharge. The electric charge operator is determined in the form $Q = T_1^3 + T_2^3 + Y$. For the subgroup $\mathrm{SU}(2)_1$ there are $n_{VL}$ generations of vector-like fermions which transform as its doublets, while they are singlets of $\mathrm{SU}(2)_2$. The number of vector-like fermion generations is greater than one in order to explain the LNU successfully, and it was fixed to $n_{VL} = 2$ for simplicity in the numerical illustration [6].
The Higgs sector consists of two doublets $\phi$ and $\phi'$ and one self-dual bidoublet $\Phi$, with $\tilde\Phi^0 = (\Phi^0)^*$. The scalar fields develop the VEVs $v_\phi$, $v_{\phi'}$, and $u$, respectively. The spontaneous symmetry breaking (SSB) of the model follows the pattern
\[
\mathrm{SU}(2)_1 \times \mathrm{SU}(2)_2 \times \mathrm{U}(1)_Y \;\xrightarrow{\;u\;}\; \mathrm{SU}(2)_L \times \mathrm{U}(1)_Y \;\xrightarrow{\;v_\phi,\, v_{\phi'}\;}\; \mathrm{U}(1)_Q .
\]
The main phenomenology of the model concerning the B-decay anomalies and lepton-flavor non-universality has been presented in [6]. However, the physical model also has to accommodate Higgs and neutrino physics as well as a DM candidate.
With the above breaking chain, the VEVs are assumed to satisfy the relation $u \gg v_\phi, v_{\phi'}$. The Yukawa Lagrangian, fermion mass matrices, and the diagonalization steps used to construct the physical states and masses of fermions were presented in detail in [6]. Hence, we summarize here only the important results and focus on the new features of generating active neutrino masses from loop corrections.
A. Charged fermion masses
The chiral fermions couple to the SM-like Higgs doublet $\phi$ through Yukawa terms with the 3 × 3 matrices $y_d$, $y_u$, $y_\ell$, where $\tilde\phi \equiv i\sigma_2 \phi^*$. The vector-like fermions can have gauge-invariant Dirac mass terms. Other contributions involve the Yukawa matrices $\lambda^\dagger_{q,\ell}$ and $\tilde y_{u,d,\ell}$, which are $n_{VL} \times 3$ matrices. After the SSB, these couplings induce mixing between the vector-like and the SM chiral fermions. This is crucial for the phenomenology of the model.
For the sake of simplicity one can assume a softly broken discrete $Z_2$ symmetry under which only $\phi'$ is odd, making the unnecessary Yukawa couplings vanish, i.e., $\tilde y_{u,d,\ell} \simeq 0$ [6]. There is another charge assignment that also forbids the Lagrangian in (16) while keeping $\phi'$ even: only $Q_L$ and $L_L$ are odd. This is necessary for generating active neutrino masses by the Zee method considered in this work.
We combine the chiral and vector-like fermions, where $i = 1, 2, 3$, $k = 1, \dots, n_{VL}$, and $I = 1, \dots, 3 + n_{VL}$. After the SSB, the fermion mass Lagrangian takes the form given in [6]; all the mass matrices are $(3 + n_{VL}) \times (3 + n_{VL})$. In the limit $\epsilon = v/u \ll 1$, these matrices are block-diagonalized perturbatively in two steps, after which the SM parts are separated from the total. The transformations of the fermion states involve the unitary $(3 + n_{VL}) \times (3 + n_{VL})$ matrices $V_F$, $V_f$, and $W_f$ [6]. At the first step, where $v = 0$, every $M_F$ ($F = U, D, E$) is diagonalized by an exact $V_F$ depending on $u$, $M_F$, and $\lambda_{\ell,q}$. At the second step, the transformations $V_f$ and $W_f$ are expanded in power series of $\epsilon$,
\[
V_f = 1 + i\epsilon^2 H_V^f + \dots, \qquad W_f = 1 + i\epsilon H_W^f + \tfrac{1}{2}\big(i\epsilon H_W^f\big)^2 + \dots,
\]
as listed precisely in [6]. After the two steps, each original mass matrix in (19) is transformed into the block-diagonal form $\tilde M_F = V_f V_F M_F W_f^\dagger$. One of the blocks in every $\tilde M_F$ is identified with the SM fermion block, which is diagonalized by 3 × 3 unitary matrices. Only the CKM matrix, $V_{\mathrm{CKM}} = S_u S_d^\dagger$, appears in the gauge couplings [6]. We can fix $S_e = U_e = I_3$.
For studying Higgs boson phenomenology within the allowed regions of parameters given in [6], which resulted from a specific assumption of two new lepton families and textures of the Yukawa couplings $\lambda_{q,\ell}$, we present the masses and eigenstates of the charged leptons in more detail; the quark sector can be derived similarly. In the flavor basis of the charged leptons, the mass matrix $M_E$ in (19) is 5 × 5. Following Ref. [6], a simple texture of $\lambda_\ell$ is chosen, in which the new parameters $\Delta_\mu$ and $\Delta_\tau$ are considered free parameters, while $M_{L_1}$, $M_{L_2}$ are "reduced" masses of the new charged leptons, $m_{E_k} \simeq u\, M_{L_k}$ [6]. We recall here important properties of the charged lepton parameters used in constructing the radiative active neutrino masses; the physical masses and the mass bases of the left- and right-handed leptons follow [6]. Non-diagonal elements of $V_L$ may be large because those of $M_E$ are at the $\mathrm{SU}(2)_1$ scale. In contrast, those of $V_e$ and $W_e$ are suppressed by at least one power of $v_\phi/u$, because the corresponding elements of $V_L M_E$ are of the order of the electroweak scale. Hence, $V_e$ and $W_e$ are nearly the identity when $u \gg v_\phi$; they only play the role of generating the light charged lepton masses of e, µ, and τ. Hence, in many cases we can use the approximations $V_e, W_e \simeq I$. We can see that $V_L$ is exactly the mixing matrix of the neutrinos if they are all considered pure Dirac particles. The formula for $V_L$ is written in block form [6]; the analytic expressions for $V^{ij}_L$, with $i, j = 1, 2$, corresponding to $\lambda_\ell$ in Eq. (20), are given in Appendix A. The Yukawa coupling matrix $y_\ell$ (13) is also relevant, with the requirement that the SM block of the charged leptons be diagonal after the block-diagonalization. This does not affect the results obtained in Ref. [6], which depend mainly on the gauge couplings.
Hereafter, many calculations concerning the phenomenology of the Higgs bosons will ignore the small mixing between different quark flavors. We apply the same results of the charged lepton sector to the quarks, with the correspondences $V^{ij}_L, \lambda_\ell, M_{L_{1,2}}, \Delta_{\mu,\tau} \to V^{ij}_Q, \lambda_q, M_{Q_{1,2}}, \Delta_{b,s}$, as given in [6]. Next, we discuss another possibility, in which the neutrinos get Majorana mass terms.
B. Neutral lepton masses
Unlike the charged leptons, where the SM-like charged leptons have their own right-handed partners, the SM-like neutrinos do not. In addition, the neutral leptons may acquire Majorana mass terms, for example $\frac{1}{2}\overline{(\nu_L)^c}\, m_\nu\, \nu_L$ for the active neutrinos. Hence, it is more convenient to write the mass matrix of the neutral leptons in the form discussed in seesaw models [9], which differs from [6]. At the beginning, $\nu_L$, $N_R$, and $N_L$ are considered independent fields. For $n_{VL} = 2$, the mass matrix of the neutral leptons is a 7 × 7 symmetric matrix, where $m_D \equiv \frac{1}{2}\lambda_\ell u$ and $M_L$ are 3 × 2 and 2 × 2 matrices, respectively; a plausible arrangement is sketched below. Similarly to seesaw models, $(N_L)^c$ and $N_R$ are additional right-handed neutrinos. A pair of two degenerate eigenvalues of the matrix (23) corresponds to one Dirac mass of a heavy Dirac neutrino, the same as mentioned in [6]. The mixing matrix of the neutrinos is derived from (22), and the new neutrino masses are pure Dirac; in addition, the new lepton masses in each family are nearly degenerate. Equation (24) gives the relations between the original and mass bases, which are the same as those of the charged leptons. To keep the lepton spectrum unchanged while solving the active neutrino mass problem, the mass terms of the active neutrinos must come from effective Majorana operators. Because the active neutrino masses are tiny, their effect on the mixing parameters with heavy neutrinos is negligible. Based on the mechanism of neutrino mass generation in the Zee model [8], in this model only one pair of new singly charged Higgs bosons, denoted $\delta^\pm \sim (1, 1, 1)_{\pm 1}$ and carrying even $Z_2$ charges, is introduced.
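For orientation, a seesaw-type arrangement consistent with the stated block dimensions is sketched below; the basis ordering is our assumption for illustration, not taken verbatim from the original equation (23):
\[
N'_L = \big(\nu_L,\; (N_R)^c,\; N_L\big)^T, \qquad
M_\nu = \begin{pmatrix} 0 & m_D & 0 \\ m_D^T & 0 & M_L^T \\ 0 & M_L & 0 \end{pmatrix},
\]
so that the 3 × 2 block $m_D$ pairs $\nu_L$ with $N_R$, the 2 × 2 block $M_L$ pairs $N_L$ with $N_R$, and the pairs of degenerate eigenvalues build two heavy Dirac neutrinos.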
The new couplings generating one-loop radiative neutrino masses consist of antisymmetric Yukawa couplings $f_{ij}$ of $\delta^+$ to the lepton doublets, analogous couplings $f'_{kl}$ to the vector-like doublets, and a trilinear scalar coupling $\lambda_\delta$, where $i, j = 1, 2, 3$ and $k, l = 1, 2$ (in the general case, $k, l = 1, 2, \dots, n_{VL}$). We stress that all terms in (25) survive simultaneously only when both $\phi'$ and $\delta^\pm$ are even under the $Z_2$ symmetry.
The terms in the first line of (25) violate lepton number, exactly as in the Zee model, where $\phi'$ plays a role similar to the second Higgs doublet. Also as in the Zee model, the trilinear coupling becomes $\lambda_\delta u$ after the first step of the spontaneous breaking. A one-loop diagram generating active neutrino masses is shown in Fig. 1. Following [8,10], the effective mass matrix of the light neutrinos is derived in Appendix B, where $\varphi^\pm$ and $\delta^\pm$ are assumed to be physical Higgs bosons. In the model under consideration, however, $\varphi^\pm$ and $\delta^\pm$ are not mass eigenstates. As we discuss later, the physical fields in the Higgs sector are $h^\pm_{1,2}$, related to the original states through the mixing parameters $c_\xi$, $s_\xi$, $c_\zeta$, and $s_\zeta$, where the angles $\xi$ and $\zeta$ are defined in Eqs. (60) and (76) below. The Higgs couplings in (25) can then be rewritten in terms of $v_\phi = v s_\beta$, $v_{\phi'} = v c_\beta$, and $t_\beta \equiv \tan\beta = s_\beta/c_\beta$, as defined in [6].
The charged leptons $e_c$ in the loop will be considered mass eigenstates with masses $m_{e_c}$.
Therefore, the Yukawa terms should be written in terms of the physical charged lepton states, and the light neutrinos are massless states after the rotation $V_L$. For simplicity, we assume that $V^{11}_L$ is real and $V_e, W_e \simeq I$. In addition, we ignore one-loop contributions to the heavy neutrino masses because they are much smaller than the tree-level masses. The one-loop corrections then come mainly from the light leptons. Here $a, c, g, h = 1, 2, 3$; $I, J = 1, 2, \dots, 7$; and $f$ and $f'$ are 3 × 3 and 2 × 2 antisymmetric matrices, respectively. The effective mass matrix $m_\nu$ of the active neutrinos is derived from (B3), where $M_e \equiv \mathrm{diag}(m_e, m_\mu, m_\tau)$; a schematic form of the result is recalled below.
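For orientation, the familiar Zee-type one-loop result has the schematic form below (a sketch up to $\mathcal{O}(1)$ factors and the heavy-lepton mixing corrections discussed above; $\zeta$ is the charged-Higgs mixing angle and $m_{h^\pm_{1,2}}$ are the physical charged Higgs masses):
\[
(m_\nu)_{ab} \;\sim\; \frac{s_{2\zeta}}{16\pi^2\, v}\; f_{ab}\,\big(m_{e_b}^2 - m_{e_a}^2\big)\, \ln\!\frac{m_{h_2^\pm}^2}{m_{h_1^\pm}^2}\,,
\]
which vanishes on the diagonal because $f$ is antisymmetric, reproducing the characteristic Zee texture.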
Including the loop contributions (27), the SM block of (24) is changed from zero into the light-neutrino mass matrix, whose diagonalization yields the three active neutrino masses and the well-known neutrino mixing matrix $U_{\mathrm{PMNS}}$. If $V^{11}_L = I_3$ and $V^{12}_L = 0$, Eq. (27) reduces to (B3). As in the Zee models, the parameters arising from the Higgs sector affect only the overall scale of the neutrino masses, while the masses and mixing angles of the active neutrinos depend on the unknown parameters in $f$, $f'$, and $V^{11}_L$. As a result, the model under consideration is less restrictive in fitting the neutrino data than the Zee models. Because these models are still viable [11], the neutrino sector discussed here is realistic. In general, fitting recent neutrino data needs at least five free parameters, corresponding to three mixing angles and two squared mass differences. Among the parameters $m_0$ and the three $f_{ab}$, two determine the order of the lightest neutrino mass, so two free parameters are left. When $n_{VL} \ge 3$, there are at least three additional parameters $f'_{kl}$, enough for fitting the neutrino data without constraints on $V^{11}_L$. Interestingly, the neutrino fitting results of [11] would apply to the model under consideration if $L_L$ carried an even $Z_2$ charge, which would preserve the lepton coupling matrix $\tilde y_\ell$ in (16).
In this case, $V^{11}_L$ may be involved in fitting the neutrino data. Our numerical investigation showed that the allowed regions in Ref. [6], controlled by the texture $\lambda_\ell$ (20), seem much more constrained.
Note that the $m_\nu$ in (27) keeps only the main contributions from loops containing light charged leptons, where mixing terms of order $\mathcal{O}(\epsilon)$ are ignored. With $u$ around 1 TeV and light new charged leptons, the contributions from these lepton mediators to $m_\nu$ will be significant, implying that their masses can be free parameters for fitting the neutrino data without large changes of $\Delta_{\mu,\tau}$. Finding the exact allowed regions should be done elsewhere.
When the neutrino data are fitted, the results of Ref. [6] for the B-decay anomalies remain unchanged, because the analysis considered there addresses only the effects of tree-level contributions from heavy gauge bosons, while other contributions from the light lepton masses are suppressed. The only changes may come from the gauge couplings of the active neutrinos with the charged gauge bosons. Following [6], after the block-diagonalization these gauge couplings are proportional to $W^\mu_l\, \bar\nu_L \gamma_\mu e_L$ and $W^\mu_h\, \bar\nu_L \gamma_\mu \Delta_\ell\, e_L$, where $W_l$ and $W_h$ are the light and heavy charged gauge bosons. In the neutrino mass basis they become $W^\mu_l\, \bar\nu_L U^\dagger_{\mathrm{PMNS}} \gamma_\mu e_L$ and $W^\mu_h\, \bar\nu_L U^\dagger_{\mathrm{PMNS}} \Delta_\ell \gamma_\mu e_L$, resulting in the same factor $(U^\dagger_{\mathrm{PMNS}})_{ii}$ for the coupling $\nu_i e_i$ with a diagonal $\Delta_\ell$ obtained from the texture of $\lambda_\ell$ in (20). This factor does not appear in the final results for the ratios of B-decay anomalies, as given in [6].
In general, the active neutrino mass generation from the radiative corrections mentioned above affects only the lepton sector. Furthermore, it does not affect the mixing parameters controlling the $\lambda_\ell$ structure at the first breaking step, which suggests that the orders of the numerical values in the allowed regions will not change after the neutrino data are fitted.
The above discussion refers only to a simple extension that can generate active neutrino masses through radiative corrections. The problems of neutrino masses and DM can be solved together in models with more charged Higgs bosons and singlet right-handed neutral leptons, such as [12]. Following the structures of these models, apart from $\delta^\pm$, at least one pair of singly charged Higgs bosons $S^\pm$ and a neutral lepton $F_R \sim (1, 1, 1)_0$ have to be introduced, where $S^+ \sim (1, 1, 1)_1$. In addition, only $S^\pm$ and $F_R$ are odd under a new $Z_2$ discrete symmetry; therefore, $F_R$ can play the role of DM. It has a Majorana mass term of the form $\frac{1}{2}\overline{(F_R)^c}\, m_F F_R$. The active neutrinos get masses from loop corrections, which arise from a new Yukawa term coupling $F_R$, the leptons, and $S^\pm$, together with a quartic coupling of the charged Higgs bosons. This kind of model seems less interesting here because the origin of the neutrino masses is not related to the new leptons.
The new ingredients generating radiative corrections to the active neutrino masses change neither the results of the gauge sector nor the LNU results discussed in [6], because no new breaking scale contributes to the masses of the gauge and Higgs bosons. If we instead add a new $\mathrm{SU}(2)_1$ triplet, denoted $\Delta \sim (1, 3, 1)_1$, creating a Yukawa term like $-Y_\Delta \overline{(L_L)^c}\, i\sigma_2 \Delta L_L + \mathrm{H.c.}$, a neutral component of this triplet will develop a non-zero VEV $v_\Delta$, which contributes a new Majorana mass term to the neutrino mass matrix (23). This matrix then has the same form as in the inverse seesaw models [9,13]; hence the active neutrino masses will be non-zero. In addition, some new neutrinos may get light masses and play the role of DM [14]. These models seem interesting because they may connect the $\mathrm{SU}(2)_1$ leptons with neutrino masses and DM. But the appearance of the new VEV $v_\Delta$ will contribute to the masses and mixing parameters of the Higgs and gauge bosons, and consequently it will affect the results shown in [6]. This extension is beyond our scope and should be studied thoroughly in another work.
Now we turn to one of the most important elements: gauge bosons.
C. Gauge boson masses
Gauge boson masses arise from the scalar kinetic terms, where the covariant derivative of the bidoublet has the standard form for a field transforming under both SU(2) groups,
\[
D_\mu \Phi = \partial_\mu \Phi - i\,\frac{g_1}{2}\,\sigma_a W^{1a}_\mu\, \Phi + i\,\frac{g_2}{2}\, \Phi\, \sigma_a W^{2a}_\mu .
\]
Inserting the VEVs gives the contributions to the gauge boson masses, from which the masses and eigenstates of the gauge bosons are found in agreement with those presented in Ref. [6]. We review the important aspects and then discuss some new properties when the masses and mixing angles are calculated up to order $\mathcal{O}(\epsilon^2)$.
D. Neutral gauge bosons
In the basis $(W_3^1, W_3^2, B)$, the squared mass matrix of the neutral gauge bosons is $M^2_{nb}$. At the first step, where $v_\phi, v_{\phi'} \to 0$, only the two states $W_3^1$ and $W_3^2$ are rotated, through a rotation $C_1$ parametrized by the quantities $n_1$ and $n_2$ already used in [6]. The first breaking step implies a corresponding transformation of the neutral gauge bosons [6]. Note that $v \simeq 246$ GeV, $g' = g\, s_W/c_W$, and $s_W$ is the sine of the Weinberg angle. At the second step, the mixing matrix $C_2$ is the SM rotation of only $B$ and $W_3$, giving the new states $A$ and $Z_l$, the photon and the SM-like gauge boson, with the corresponding matrix $M'^2_{nb}$. The mass eigenstates $(Z, Z')$ are related to $(Z_l, Z_h)$ through the $Z_l - Z_h$ mixing angle, where $\epsilon \equiv v/u$. The $Z_l - Z_h$ mixing vanishes when $\beta' = \beta$, where $\tan\beta = s_\beta/c_\beta$. The masses of the physical eigenstates $(Z, Z')$ then follow, and the relation between the two bases $(W_1, W_2, B)$ and $(A, Z, Z')$ is given by $(C_2 C_1)^T$, the first matrix on the right-hand side of (35).
Using the new notation of (32), the parameter $\zeta$ in [6] can be expressed in terms of $s_{2\beta - 2\beta'}$, $u^2$, and $g\, n_2 s_{\beta'}$; from this we deduce the approximate form $\xi_Z \simeq \frac{1}{2}\tan 2\xi_Z$ in the limit $\epsilon \ll 1$, consistent with the expression for $\tan 2\xi_Z$ shown in (34).
E. Charged gauge bosons
In the basis $(W_1^+, W_2^+)$, the squared mass matrix of the charged gauge bosons was given in Ref. [6]. Setting $v = 0$, we can define a new basis $(W_l^\pm, W_h^\pm)$ with the corresponding squared mass matrix. The SM-like boson $W^\pm$ is identified with $W^\pm \equiv W_l^\pm$, with mass $m_{W_l} = gv/2$. The $W_l^+ - W_h^+$ mixing is defined through the mixing angle $\xi_W$ satisfying (37), from which it follows that the ratio of the tangents of the $W - W'$ and $Z - Z'$ mixing angles is $c_W$, as written below. This reduces the number of parameters in the model by one.
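In compact form, and consistent with the small-angle relation quoted later in section V (an illustration of the statement above, with $t_x \equiv \tan x$):
\[
\frac{t_{\xi_W}}{t_{\xi_Z}} = c_W \qquad\Longrightarrow\qquad t_{\xi_W} = c_W\, t_{\xi_Z},
\]
so that fixing one of the two mixing angles fixes the other.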
The physical mass eigenstates $(W^\pm, W'^\pm)$ are given by the standard rotation
\[
W^\pm = c_{\xi_W} W_l^\pm + s_{\xi_W} W_h^\pm, \qquad W'^\pm = -s_{\xi_W} W_l^\pm + c_{\xi_W} W_h^\pm,
\]
with $c_{\xi_W} \equiv \cos\xi_W$, $s_{\xi_W} \equiv \sin\xi_W$, and the masses given in (39). Note that $Z$ and $W$ are the SM-like gauge bosons.
We now derive the approximate formulas for the mixing angles and the masses of the SM-like gauge bosons up to order $v^2 \times \mathcal{O}(\epsilon^2)$, because corrections at this order to the masses may contribute significantly to precision tests such as the $\rho$ parameter. From (34) and (37) it follows that $s^2_{\xi_Z}$ and $s^2_{\xi_W}$ are of higher order and can be ignored. For this reason, the masses of the gauge bosons in (35) and (39) can be expanded accordingly. In addition, at the tree level the $\rho$ parameter, $\rho \equiv m_W^2/(c_W^2 m_Z^2)$, satisfies the expression given in (42). Using $e = g' c_W = g s_W$, etc., expression (42) gives the well-known electromagnetic current, where $Q$ is the electric charge operator defined in (6).
The neutral currents are defined with $J^\mu(Z_l)$ and $J^\mu(Z_h)$ as found from (43) and (44). Recall that the physical neutral gauge bosons are $Z$ and $Z'$, defined through the $Z_l - Z_h$ mixing angle (34), leading to the respective neutral currents (45) and (46). The second term in (45) is the NP contribution.
Let us write explicitly the neutral current of the $Z$ boson, where $T_1^3 = 0$ and $T_2^3 = \frac{1}{2}\sigma_3$ for the SM fermion doublets, while $T_1^3 = \frac{1}{2}\sigma_3$ and $T_2^3 = 0$ for the extra fermion doublets. Only the interactions in the first line of (47) are the SM ones; the rest provide NP effects. Note that the interactions of the new vector-like fermions include both $P_L$ and $P_R$ parts (i.e., they are vector-like).
We write the couplings of the $Z$ boson with the physical fermion states in the form (48), where $P_{L,R} = (1 \mp \gamma_5)/2$. The couplings $g_L$ and $g_R$ are listed in Table I. We keep only the significant contributions to $g_R$, namely those containing both factors of heavy masses and $\epsilon$, as shown in the last two lines of Table I. We can see that although the new fermions are all vector-like in the flavor bases, they are not vector-like in the mass bases, because they mix with the chiral $\mathrm{SU}(2)_2$ leptons through the Yukawa interactions (16).
In contrast to [6], in our work the neutral currents are written in the basis of physical neutral gauge bosons, SM Z and extra Z ′ , from which their decays can easily be studied.
A. Charged currents
The Lagrangian of the charged currents, rewritten in terms of the physical states of the charged gauge bosons, has the W boson part
\[
\mathcal{L} = \frac{g\, c_{\xi_W}}{\sqrt{2}}\, W_\mu\, \bar f \gamma^\mu \left(g_L P_L + g_R P_R\right) f' + \mathrm{H.c.},
\]
so the couplings of the W boson with the physical fermions are as shown in Table II. The new-physics interactions are contained in (51). From the experimental data on the W decay width, one can get constraints on the mixing angles; this was discussed in detail in Ref. [6].
From the scalar fields introduced in section II, the Higgs potential is given as in Ref. [6]. Because the $\mu$ parameter is proportional to the squared masses of the charged and CP-odd Higgs bosons, it must be positive, given the minus sign in front of it in the potential (54).
The neutral scalars are expanded around their VEVs as
\[
\varphi^0 = \frac{1}{\sqrt{2}}\left(v_\phi + S_\phi + i A_\phi\right), \quad
\varphi'^0 = \frac{1}{\sqrt{2}}\left(v_{\phi'} + S_{\phi'} + i A_{\phi'}\right), \quad
\Phi^0 = \frac{1}{\sqrt{2}}\left(u + S_\Phi + i A_\Phi\right).
\]
At the tree level, the minimum conditions of the Higgs potential are similar to those in Ref. [6], except for the opposite sign of $\mu$.
Based on the minimum conditions, the parameters $\mu^2_\phi$, $\mu^2_{\phi'}$, $\mu^2_\Phi$ can be expressed as functions of the Higgs self-couplings and of $u$, $v$, and $\beta$. The masses, mass eigenstates, and couplings of the Higgs bosons are then calculated by inserting these expressions into the Higgs potential (54).
A. Squared mass matrices of the Higgs bosons
In the original bases of the singly charged and CP-odd neutral Higgs bosons, $\phi^\pm = (\varphi^\pm, \varphi'^\pm, \Phi^\pm)^T$ and $A = (A_\phi, A_{\phi'}, A_\Phi)^T$, the corresponding squared mass matrices are given in [6]. In the basis of the CP-even Higgs bosons, $S = (S_\phi, S_{\phi'}, S_\Phi)^T$, the squared mass matrix $M^2_S$ corresponds to the mass term $\frac{1}{2} S^T M^2_S S$. The above matrices are consistent with those given in [6] after using the relations (56).
B. Physical spectrum of Higgs bosons and their couplings
We find the Higgs boson masses in two steps. At the first step, where $v \to 0$, all three squared mass matrices are diagonalized through the same transformation. In the second step, it is easy to determine the rotations diagonalizing the squared mass matrices of the charged and CP-odd neutral Higgs bosons; defining the mixing angle $\zeta$, the total mixing matrices used to diagonalize the mass matrices in (57) are obtained. The mass eigenstates of the charged and CP-odd Higgs bosons, denoted $H^\pm = (G_1^\pm, G_2^\pm, h^\pm)^T$ and $H_A = (G_{Z_1}, G_{Z_2}, h_a)^T$, are related to the original states through the corresponding rotations. Two linear combinations of $G_1^\pm$ and $G_2^\pm$ are Goldstone bosons eaten by the $W'^\pm$ and $W^\pm$ gauge bosons; similarly, linear combinations of $G_{Z_1}$ and $G_{Z_2}$ are eaten by $Z$ and $Z'$. There remain one pair of physical charged Higgs bosons $h^\pm$ and one physical CP-odd neutral Higgs boson $h_a$, with masses $m^2_{h^\pm}$ and $m^2_A$, respectively. Regarding the CP-even neutral Higgs bosons, after the rotation (59) the squared mass matrix is $M'^2_S = C_1 M^2_S C_1^T$, a 3 × 3 matrix whose elements are given in (64). In general, $M'^2_S$ is complicated and cannot be diagonalized exactly. Instead, using the parameter $\epsilon \equiv v/u \ll 1$, we find approximate solutions for the mass eigenvalues, keeping terms up to the order of the electroweak scale. This is reasonable because the SM-like Higgs boson mass was found to be 125 GeV. Approximate solutions were used earlier to find consistent masses of the lightest CP-even neutral Higgs bosons in supersymmetric models [22]. The mixing matrix is also determined approximately, corresponding to the mass eigenvalues.
We start by finding the eigenvalues of the matrix $M'^2_S$ by solving $\mathrm{Det}(M'^2_S - \lambda I_3) = 0$, where $\lambda$ is expanded as $\lambda = u^2(\lambda_0 + \lambda_1 \epsilon^2)$ to keep it up to the order of the electroweak scale $v$, assuming $\lambda_0, \lambda_1 \sim \mathcal{O}(1)$. Using $v = u\epsilon$, the characteristic equation separates order by order, with coefficients $a_0 = a_0(\lambda_0)$ and $a_1 = a_1(\lambda_0, \lambda_1)$; we consider only the two equations in (65). The first equation in (65) shows that the largest contributions to the Higgs masses are the solutions of $a_0(\lambda_0) = 0$, giving one zero and two non-zero values. The two non-zero values correspond to two heavy CP-even neutral Higgs bosons with squared masses of order $\mu u/(2 s_\beta c_\beta)$ and $\lambda_3 u^2$, which equal the largest contributions of the two last diagonal entries of $M'^2_S$ shown in (64). A light CP-even neutral Higgs boson corresponds to $\lambda_0 = 0$; its mass comes from the second equation of (65), given in (66). It can be checked that after this rotation the light Higgs boson mass is consistent with (66). Therefore, the mixing matrix relating the original basis $S$ and the physical basis $H^0 = (h_1^0, h_2^0, h_3^0)^T$ can be constructed. The light Higgs boson $h_1^0$ is identified with the SM-like Higgs boson found at the LHC. The recent experimental data show that the SM predictions agree with the observations within 1$\sigma$ [16]; hence, the couplings of $h_1^0$ with other SM particles must be consistent with these data. The relevant couplings of $h_1^0$ are shown in Table III, including the couplings with $h_2^\pm$ needed to generate active neutrino masses. One sees easily that all couplings with the SM-like particles differ from the SM predictions by a common factor $c_h$; so $|c_h|$ should be close to unity, i.e., $|s_h|$ should be small. Its upper bound can be found as follows. Consider $h_1^0$ production at the LHC: new heavy quarks can play the role of the top quark in the gluon-gluon fusion mechanism, where their couplings are proportional to $s_h$ or $\epsilon$. A significant contribution related to $\epsilon$ may come from the quarks $U_2$, where the couplings contain a factor $(\Delta_b m_t)^2 \epsilon/v^2$. But the constraint from [6] gives $(\Delta_b m_t)^2 \epsilon/v^2 \sim 10^{-4}\epsilon$, which is suppressed. The lowest-order production cross section of $h_1^0$ through gluon-gluon fusion is $\sigma_0$ of [23], with arguments $t_q$ for the SM-like quarks, the new quarks $U_1, D_1$, and the new quarks $U_2, D_2$, respectively; the standard form factor $A_{1/2}(t)$ is recalled below.
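For reference, the standard lowest-order form factor used in gluon-gluon fusion (see, e.g., [23]); writing $t_q = m^2_{h_1^0}/(4 m_q^2)$ for each quark in the loop is our assumption about the elided definition:
\[
A_{1/2}(t) = \frac{2}{t^2}\Big[t + (t-1) f(t)\Big], \qquad
f(t) =
\begin{cases}
\arcsin^2\!\sqrt{t}, & t \le 1,\\[4pt]
-\dfrac{1}{4}\left[\ln\dfrac{1+\sqrt{1-1/t}}{1-\sqrt{1-1/t}} - i\pi\right]^2, & t > 1,
\end{cases}
\]
which satisfies $A_{1/2}(t) \to 4/3$ in the heavy-quark limit $t \to 0$, the limit used in the text below.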
Using $m_{Q_{1,2}} \simeq M_{Q_{1,2}} u$, as given in [6], Eq. (69) can be rewritten with $t_0$ denoting the top-quark argument. The condition $m_t, m_{Q_{1,2}} > m_{h_1^0} = 125.09$ GeV justifies the limit $A_{1/2}(t_q) \to 4/3$ for all $q = 0, 1, 2$. The corresponding signal strength of Higgs production is given in (71), where we follow the notation for signal strengths defined in [16]. Similarly, the partial decay width of the channel $h_1^0 \to gg$ is determined with $\mu_{hgg} = \mu_{ggF}$. Because $s_h \epsilon = \mathcal{O}(\epsilon^2)$ and the branching ratio of this decay is smaller than 9%, we use the naive approximation $\mu_{ggF} \simeq c_h^2$ to find a lower bound on $|c_h|$. For all remaining decay channels of the SM-like Higgs boson into SM particles, the tree-level couplings always differ from the SM predictions by the factor $c_h$; therefore $\mu_f = c_h^2$ for all main decays $f = \bar f f, WW^*, ZZ^*$. The global signal strength defined in [16] can then be formulated approximately as $\mu \simeq c_h^2$, which gives the constraint $0.995 \le |c_h| \le 1$ and $|s_h| \le 0.10$; the small arithmetic behind the second bound is spelled out below.
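The second bound follows from the first by simple arithmetic (shown here for completeness):
\[
|s_h| = \sqrt{1 - c_h^2} \;\le\; \sqrt{1 - (0.995)^2} \;=\; \sqrt{0.009975} \;\approx\; 0.0999 \;\le\; 0.10 .
\]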
If we use the constraints on $\Delta_{b,s}$ and $c_{\beta'}$ given in [6], the values of $|s_h|$ satisfying (74) are reasonable for the approximation just discussed. In addition, the constraint on $s_h$ results in small couplings of $h_1^0$ to the heavy fermions and the $W'$ gauge boson, giving suppressed contributions to the decay rate $h_1^0 \to \gamma\gamma$. The couplings $h_1^0 h^\pm_{1,2} h^\mp_{1,2}$ depend on many unknown Higgs self-couplings, implying that the charged Higgs masses are not constrained by the experimental data on the decay $h_1^0 \to \gamma\gamma$, so we do not consider this decay further.
C. Singly charged Higgs bosons with additional δ ±
In this section we consider the model including the new singly charged Higgs bosons $\delta^\pm$ discussed in the neutral lepton sector. Apart from the second term in (25), the Higgs potential acquires new terms involving $\delta^\pm$. The appearance of $\delta^\pm$ does not change the allowed regions of parameters discussed in [6].
Also, the results derived for the Higgs bosons are unchanged, except in the singly charged Higgs sector. In the basis $(\varphi^\pm, \varphi'^\pm, \Phi^\pm, \delta^\pm)^T$, the squared mass matrix has only a few non-zero elements. This matrix is diagonalized by a transformation involving a mixing angle $\xi$ satisfying (76). The total transformation then changes the original basis into the mass eigenstate basis $(G_1^\pm, G_2^\pm, h_1^\pm, h_2^\pm)^T$. We note that the Goldstone bosons $G^\pm_{1,2}$ defined in (62) are not affected by the presence of $\delta^\pm$.
In the limit $\xi \to 0$, the masses $m^2_{h^\pm_{1,2}}$ reduce to those found before, with $h_1^\pm$ coinciding with the $h^\pm$ defined in (62) and $h_2^\pm \equiv \delta^\pm$; this limit is used for simple approximations because Eq. (76) means $t_{2\xi} \sim \epsilon \ll 1$. The relevant couplings of the charged Higgs bosons to fermions are collected in Table IV, and the couplings of the charged Higgs bosons with gauge bosons are shown in Table V. Only the couplings of $h_1^\pm$ are shown, because the couplings of $h_2^\pm$ can be derived by simple replacements; we consider here only the case $\xi \to 0$. Some important properties of $h_1^\pm$ are as follows. The couplings of $h_1^\pm$ to normal fermions differ from those of the SM-like Higgs boson by simple ratios such as $g_{h_1^\pm \ell \nu_\ell}/g_{h_1^0 \ell\ell}$. As shown in Table V, the couplings of $h_1^\pm$ to the SM-like bosons, namely $h_1^\pm Z W^\mp$ and $h_1^\pm h_1^0 W^\mp$, are extremely small because they contain factors $s_Z \epsilon^2 \sim \mathcal{O}(\epsilon^4)$, $s_Z \sim \mathcal{O}(\epsilon^2)$, and another mixing smaller than 0.02. The other couplings to light fermions are also small because $s_\xi \to 0$. With $c_\zeta, c_\xi \to 1$, the main decay of $h_1^\pm$ into light particles is $h_1^+ \to t\bar b$. If $m_{h_1^\pm} > m_{W'}, m_{Z'}$, two additional large decay modes appear, $h_1^+ \to Z' W^+, Z W'^+$. In contrast to $h_1^\pm$, the charged Higgs bosons $h_2^\pm$ couple strongly only to leptons and Higgs bosons; therefore, the main decay modes are $h_2^+ \to (\nu_{e_i})^c e_j$ with $e_j = e, \mu, \tau$, and the main production processes for $h_2^\pm$ at colliders follow from these couplings. We would like to compare the above singly charged Higgs bosons with those predicted by the Zee models, where the charged Higgs sector was investigated thoroughly in Ref. [17]; the two sectors contain the same kinds of particles. But the predictions for charged Higgs boson production at colliders like the LHC are different, because of the appearance of new particles, such as new heavy quarks and Higgs bosons, and the constraints from the allowed regions of parameters indicated in [6]. We review these regions before discussing the signals of new particles at colliders.
Here $g = 2 m_W/v \simeq 0.651$ is the SM gauge coupling, and $\zeta'$ satisfies $t_{\xi_W} = c_W t_Z \simeq c^3_{\beta'} s_{\beta'}\, \zeta' \epsilon^2$ [6]. This gives the constraints in (77). We can see that the allowed values of $\zeta'$ give very small values of $t_Z$ and $t_{\xi_W}$, even with large $\epsilon < 1$.
For simplicity, we also use the following approximations. The simple texture of $\lambda_\ell$ in (20) gives $m_{L_1} = u\, M_{L_1} s_{\beta'}$ and $m_{L_2} = u\, M_{L_2}\, \rho_{\mu\tau}$; for the quark sector, $m_{Q_1} = u\, M_{Q_1} s_{\beta'}$ and $m_{Q_2} = u\, M_{Q_2}\, \rho_{sb}$. The masses of the heavy particles then follow.
B. Searches for new fermions at colliders
From the above discussion, if the new fermions are lighter than all the new bosons, including $W'^\pm$, $Z'$, $h^0_{2,3}$, and $h^\pm_{1,2}$, they have only two-body decays into a SM fermion and a SM-like boson, for both the first and the second family of new fermions. Because of the suppressed $\Delta_{\tau,b}$, the main decay modes are those given in (84) and (85). The partial decay widths of the decays $F \to f h_1^0, f W, f Z$ are determined by the couplings $g_{FfV}$ ($V = W, Z$) and $Y_{Ffh_1^0}$ of the fermions with gauge and Higgs bosons given in Tables I, II, and III; an additional color factor of 3 is included for quark decays. A generic sketch of these widths is given below.
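For orientation, the familiar two-body widths have the schematic form below (a sketch neglecting the final-state fermion mass and assuming a single chirality dominates; the exact expressions with the couplings of Tables I-III are in the original equations):
\[
\Gamma(F \to f h_1^0) \simeq \frac{|Y_{Ffh_1^0}|^2}{32\pi}\, m_F \left(1 - \frac{m_{h_1^0}^2}{m_F^2}\right)^{\!2},
\qquad
\Gamma(F \to f V) \simeq \frac{|g_{FfV}|^2}{32\pi}\, \frac{m_F^3}{m_V^2} \left(1 - \frac{m_V^2}{m_F^2}\right)^{\!2} \left(1 + \frac{2 m_V^2}{m_F^2}\right),
\]
with an extra factor of 3 for quark decays.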
The decays listed in (84) and (85) always have the widths into $h_1^0$ dominating, with $\zeta'$ satisfying (77). Hence, every heavy fermion decays mainly into a light fermion and a SM-like Higgs boson.
Heavy fermions have been searched for at the LHC recently, for example through heavy lepton decays into pairs of light leptons and SM-like gauge bosons [18], and the null results are consistent with this investigation. Other heavy quark decays listed in (85) have been searched for as well [20], including $U_2, D_2 \to Zt, Zb$ [21]. But the promising channels predicted by this discussion are only $U_2 \to c h_1^0$ and $D_2 \to b h_1^0$.
In conclusion, we have indicated that the allowed regions of parameters given in [6] predict the main fermion decays listed in (86). To our knowledge, these decay channels have not yet been treated experimentally. We emphasize that this discussion is valid for heavy fermions lighter than all the other new bosons; any fermion heavier than a heavy gauge boson or a Higgs boson will decay mainly into light fermions and that boson.
C. Searches for new Higgs bosons at colliders
At the LHC, the promising possibility of detecting $h_2^0$, which couples strongly with the heavy fermions, was indicated in [6]. These large couplings are shown in Table VI, where only the large couplings of the neutral CP-even Higgs bosons are listed for investigating Higgs production.
We now focus on the remaining new Higgs bosons, $h_3^0$, $h_a$, $h_1^\pm$, and $h_2^\pm$. It turns out that they inherit many properties of the new Higgs bosons predicted in THDMs and the MSSM, except for $h_2^\pm$. The Higgs sector of the Zee models can be regarded as that of a THDM plus a pair of singly charged Higgs bosons, as investigated thoroughly in Ref. [17].
The complete investigation of the Higgs phenomenology of the MSSM was presented in [23], including a brief comparison with the Higgs sector of THDMs. The Higgs bosons $h_3^0$, $h_a$, and $h_1^\pm$ have degenerate masses containing the factor of the trilinear Higgs self-coupling $\mu$. This property is the same as in the MSSM, but completely different from THDMs. Both Refs. [17] and [23] considered the Yukawa part of the type-II THDM, where the up and down right-handed singlets of the light fermions couple to different Higgs doublets. In contrast, in the model under consideration all right-handed fermions couple to the same Higgs doublet $\phi$. This explains why the couplings of the neutral $h_3^0$ and of $h_1^\pm$ with all quarks always contain the same factor $1/t_\beta$, and the couplings of $h_a$ with all SM-like fermions contain the same factor $1/t_\beta$, as shown in Table VII. In contrast, the couplings of the up and down quarks in the MSSM and the type-II THDM carry different factors of $1/t_\beta$ and $t_\beta$, respectively. The notation $\beta$ in this work is equivalent to $1/t_\beta$ defined in [17,23], where the allowed $t_\beta$ is consistent with the constraint (78).
The recent searches for Higgs bosons in THDMs and the MSSM can now be used to predict the detection prospects of the new Higgs bosons discussed in this work. We consider only Higgs bosons heavier than the top quark; the possible main decays are listed in (87). Expressions for the Yukawa couplings $Y_{hff}$, the Higgs-gauge-gauge couplings $g_{hVV}$, the Higgs-Higgs-gauge couplings $g_{hhV}$, and the triple Higgs couplings $\lambda_{hhh}$ were listed in the Tables above. The correlations between the different partial decay widths of a Higgs boson depend only on the last factors of the formulas in (87); hence, these will be used to estimate the largest partial decay widths.
The main decay channels of $h^\pm_1$, namely $h^+_1 \to t\bar{b}$, $Z'W$, and $ZW'$, have relative factors in which the allowed values of $t_\beta$ are given in (78). Hence, if $m_{h^\pm_1}$ is not much larger than the heavy gauge boson masses, the main decay is $h^+_1 \to t\bar{b}$, where the $h^\pm_1 tb$ coupling is the same as in the MSSM. The LHC has recently searched for this decay [24,25] through the production channel $pp \to tbh^\pm$, giving a lower bound of 1 TeV for $m_{h^\pm_1}$.
VI. CONCLUSION
Recently, the G221 model was introduced in Ref. [6] with the main purpose of explaining all experimental data in flavor physics, tau decays, electroweak precision data, and the LNU phenomenology arising from the anomalies in B decays. But two crucial questions remain for this model, namely how to generate the active neutrino masses and the DM. This work indicated that these problems can be solved using mechanisms that generate the active neutrino masses by radiative corrections. In particular, the simplest way to generate the active neutrino masses, based on the Zee models, was shown in detail. The model predicts the existence of a new pair of singly charged Higgs bosons that have large couplings only with light leptons and Higgs bosons. The DM problem can be solved by applying mechanisms similar to those in the many radiative neutrino mass models with DM that have been widely investigated previously.
In this work we have analyzed a more general diagonalization of the gauge boson mass matrices. We have found that the ratio of the tangents of the $Z$-$Z'$ and $W$-$W'$ mixing angles is the cosine of the Weinberg angle, $\cos\theta_W$. As a consequence, the number of model parameters is reduced by one. Hence, their behaviors can be predicted based on well-known studies of the THDM as well as of the MSSM.
We combined the above results with the allowed regions of parameters indicated in Ref. [6] to predict some promising decay channels of the new fermions and Higgs bosons. We found that decays of the new heavy particles into SM-like gauge bosons are very suppressed, due to the very small mixing between heavy and SM gauge bosons. The main decays of heavy fermions into two SM-like particles are the decays $F_{1,2} \to h^0_1 f_{1,2}$. Decays into SM-like fermions of the third family are very suppressed because the allowed regions contain the tiny coefficient $\Delta_{\tau,b}$. The main decay of $h^\pm_1$ is $h^+_1 \to t\bar{b}$. The latest searches for this decay channel give a 1 TeV lower bound for the charged Higgs boson mass.
The LHC has searched for many decay channels of new fermions into SM-like fermions of the third family, so the model will be tested by experiments in the coming years. If these decay channels are detected, the model must be extended; for example, a third family of new vector-like fermions should be added to relax the allowed regions of parameters. If $\lambda_\ell$ has the form given in Eq. (20), the precise formula of $V^{11}_L$ defined in (22) involves $\rho_{\mu\tau} = 1 - c^2_{\beta'}\Delta^2_\mu + \Delta^2_\tau$; the other submatrices contained in $V_L$ follow accordingly. After the block-diagonalization, the SM blocks of the fermion mass matrices must satisfy the experimental constraints. In general, the SM block of the charged lepton mass matrix, $V^e V_L M_E W^\dagger_e = M'_E$, will not be diagonal if the matrix $y_\ell$ in (13) is assumed to be diagonal for simplicity. Instead, $y_\ell$ is chosen so that only the mixing in the $\mu$-$\tau$ sector is non-zero. There exist values of $y_{\mu\tau,\tau\mu}$ such that the matrix (A2) is diagonal and the result of [6] is unchanged. The diagonal SM block of the charged leptons also guarantees that the lepton-flavor-violating decay $h^0_1 \to \mu\tau$ is suppressed, consistent with experimental constraints. Then $y_{\mu\tau}$ and $y_{\tau\mu}$ are chosen to satisfy the condition $(M_\ell)_{23} = (M_\ell)_{32} = 0$, and the elements of the Yukawa coupling matrix $y_\ell$ can then be expressed accordingly. | 2017-05-25T04:32:46.000Z | 2016-11-21T00:00:00.000 | {
"year": 2016,
"sha1": "53e3f2510228a537ab9cf406e24eb742d633ccdf",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-4866-x.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "53e3f2510228a537ab9cf406e24eb742d633ccdf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13085487 | pes2o/s2orc | v3-fos-license | Observation of Faraday Waves in a Bose-Einstein Condensate
Faraday waves in a cigar-shaped Bose-Einstein condensate are created. It is shown that periodically modulating the transverse confinement, and thus the nonlinear interactions in the BEC, excites small amplitude longitudinal oscillations through a parametric resonance. It is also demonstrated that even without the presence of a continuous drive, an initial transverse breathing mode excitation of the condensate leads to spontaneous pattern formation in the longitudinal direction. Finally, the effects of strongly driving the transverse breathing mode with large amplitude are investigated. In this case, impact-oscillator behavior and intriguing nonlinear dynamics, including the gradual emergence of multiple longitudinal modes, are observed.
In 1831, Faraday studied the behavior of liquids contained in a vessel subjected to oscillatory vertical motion [1]. He found that fluids including alcohol, white of egg, ink and milk produce regular striations on their surface. These striations oscillate at half the driving frequency and are termed Faraday waves; they are considered an important discovery. Since then, the more general topic of pattern formation in driven systems has been met with great interest, and patterns have been observed in hydrodynamic systems, nonlinear optics, oscillatory chemical reactions, and biological media [2].
In this paper we study pattern formation by modulating the nonlinearity in a Bose-Einstein condensate (BEC). Nonlinear dynamics arise from the interatomic interactions in this ultracold gas. In the past the observation of interesting phenomena has motivated researchers to propose and implement various techniques to manipulate the nonlinearity. Such control has been accomplished for example by exploiting Feshbach resonances [3]. In our experiment we investigate an alternative technique, namely periodically modulating the nonlinearity by changing the radial confinement of an elongated, cigar-shaped BEC held in a magnetic trap. The radial modulation leads to a periodic change of the density of the cloud in time, which is equivalent to a change of the nonlinear interactions and the speed of sound. This can, in turn, lead to the parametric excitation of longitudinal sound-like waves in the direction of weak confinement. This process is analogous to Faraday's experiment where the vertical motion of the vessel produced patterns that were laterally spread out.
It has been shown theoretically that for a BEC, a Faraday type modulation scheme in the case of small driving frequencies leads one to the same type of analysis as would the direct modulation of the interatomic interaction, e.g., by a Feshbach resonance [4,5]. In both cases, the dynamics are governed by a Mathieu equation that is typical for parametrically driven systems. Floquet analysis reveals that a series of resonances exist, consisting of a main resonance at half the driving frequency, and higher resonance tongues at integer multiples of half the driving frequency [4].
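To make the Mathieu-equation picture concrete, the following minimal sketch (in Python, with purely illustrative parameter values rather than the experimental ones) integrates $\ddot{x} + \omega_0^2\,[1 + \varepsilon\cos(\omega_d t)]\,x = 0$ and compares driving at the main resonance $\omega_d = 2\omega_0$, where the excited mode oscillates at half the driving frequency, with off-resonant driving:

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu(t, y, w0, eps, wd):
    """x'' = -w0^2 (1 + eps*cos(wd*t)) x, written as a first-order system."""
    x, v = y
    return [v, -w0**2 * (1.0 + eps * np.cos(wd * t)) * x]

w0, eps = 2 * np.pi * 1.0, 0.1            # mode frequency 1 Hz, 10% modulation
y0, t_span = [1e-3, 0.0], (0.0, 40.0)     # tiny initial "noise" amplitude

for wd in (2.0 * w0, 1.4 * w0):           # on vs. off the main resonance
    sol = solve_ivp(mathieu, t_span, y0, args=(w0, eps, wd),
                    max_step=1e-2, rtol=1e-8)
    growth = np.max(np.abs(sol.y[0])) / y0[0]
    print(f"wd/w0 = {wd / w0:.1f}: amplitude grew by a factor of {growth:.0f}")
```

On resonance the tiny initial amplitude grows roughly exponentially, while off resonance it stays bounded; the higher resonance tongues of the Mathieu equation appear at $\omega_d = 2\omega_0/n$ for integer $n$.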
In our experiment we exploit this transverse modulation scheme for three different applications. First, we apply a relatively weak continuous modulation, demonstrate the emergence of longitudinal Faraday waves, and study their behavior as a function of the excitation frequency. Second, we investigate longitudinal patterns that emerge as a consequence of an initial transverse breathing mode excitation without the presence of a continuous drive. This has important consequences in the context of damped BEC oscillations and has been studied theoretically in [6]. Since the first experiments with BECs, the study of collective excitations has been a central theme [7,8]. The transverse breathing mode, which we exploit in our experiments, plays a prominent role: Chevy et al. [9] showed that this mode exhibits unusual properties, namely an extremely high quality factor and a frequency nearly independent of temperature. Finally, in a third set of measurements we study the situation of a relatively strong modulation, resonantly driving the transverse breathing mode. We show that the condensate responds as an impact oscillator, which leads to intriguing multimode dynamics.
The experiments were carried out in a newly constructed BEC machine that produces cigar-shaped condensates of $^{87}$Rb atoms in the $|F = 1, m_F = -1\rangle$ state. The typical atom number in the BEC is $5 \times 10^5$, and the atoms are evaporatively cooled until no thermal cloud surrounding the condensate is visible anymore. The atoms are held in a cylindrically symmetric Ioffe-Pritchard type magnetic trap with harmonic trapping frequencies of $\{\omega_{xy}/(2\pi), \omega_z/(2\pi)\} = \{160.5, 7\}$ Hz. The weakly confined z-direction is oriented horizontally.
For the experiments described below, the following collective mode frequencies are of particular importance: first, there exists a high-frequency transverse breathing mode. For our trap geometry, this mode has a frequency of $\omega_\perp/(2\pi) = 321$ Hz [10,11], very close to the limit of vanishing axial confinement, $2\,\omega_{xy}/(2\pi)$. The second set of modes in which we are interested here consists of axial modes which, for large quantum numbers, correspond to sound waves in the z-direction. The frequencies of this discrete set of modes can be approximately calculated as given in [12,13].
In order to investigate the parametric driving process mentioned above, we first performed a set of "spectroscopy" experiments in which we continuously modulated the transverse trapping confinement at a fixed modulation frequency and observed the subsequent emergence of longitudinal Faraday waves. For each excitation frequency the modulation amplitude was adjusted such that the longitudinal patterns emerged typically at some point after 10 to 30 oscillations. On the breathing mode resonance, a trap modulation of 3.6% was used, while at many other frequencies trap modulations of up to 42.5% were chosen to obtain clearly visible patterns [14]. This range of modulation depths is similar to the range used in numerical simulations in [5]. Representative examples for the resulting patterns are shown in Fig. 1. All experimental images in this manuscript were taken by destructive in-trap imaging.
The average spacing of adjacent maxima in the resulting pattern is plotted against the driving frequency in Fig. 2. The data lie on a clear curve, with the exception of the points near a driving frequency of 160.5 Hz, corresponding to the transverse dipole mode resonance (i.e. transverse slosh motion) [10]. However, inspection of our experimental images reveals that, at this frequency, we also excite the transverse breathing mode at 321 Hz. Excitation of the breathing mode is a very effective way of creating longitudinal patterns. Therefore the patterns obtained at 160.5 Hz are actually the same as those produced at 321 Hz. In order to rationalize the data, we first note that parametric excitation with a certain driving frequency excites oscillations predominantly at half the driving frequency, the main resonance also observed in Faraday's experiments. The dispersion relation of longitudinal collective modes that become sound-like for high quantum numbers is given in [12,13] and is used to calculate the expected spacing between density maxima. The resulting spacings are plotted as the step-like curve in Fig. 2 and are in excellent agreement with our experimental data, corroborating the assumption and theory of a parametric driving process.
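As a rough cross-check of this relation (a back-of-the-envelope sketch, not the full discrete-mode dispersion of [12,13] used for the curve in Fig. 2), one can treat the resonant longitudinal mode as a simple sound wave, $\omega = ck$, excited at half the driving frequency; the sound speed below is the ~2 mm/s value quoted later for the very elongated cloud and serves only as a placeholder:

```python
import numpy as np

c = 2e-3          # speed of sound in m/s (value quoted for the very elongated cloud)
f_drive = 321.0   # driving frequency in Hz

f_mode = f_drive / 2.0              # main parametric resonance
k = 2 * np.pi * f_mode / c          # resonant wavenumber from w = c*k
wavelength = 2 * np.pi / k          # spatial period of the Faraday pattern
print(f"estimated pattern spacing: {wavelength * 1e6:.1f} micrometers")
```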
In a second set of experiments we show that the longitudinal modes are driven by the transverse breathing motion even without the presence of a continuous external drive. In particular, this rules out any influence of tiny residual axial trap modulation that may have existed during our continuous drive due to experimental imperfections [14]. For this, we excited the transverse breathing mode at 321 Hz by driving the transverse trap confinement for a few cycles, and then let the condensate evolve without the drive. For a gentle excitation, we do not see longitudinal patterns immediately after the end of the drive, but can observe them emerge at later times. A weaker excitation delays the emergence of longitudinal modes to later times, while in the case of strong excitations the patterns can emerge within the first three cycles. In order to follow the evolution, we quantify the presence of longitudinal patterns as described in [15]. Fig. 3 shows the evolution after the end of a moderate excitation. For these data, we excited the condensate for only two cycles, varying the transverse trap frequency by 9% at a modulation frequency of 321 Hz. Immediately after the excitation, the obtained images showed no longitudinal waves, and our pattern visibility measure, plotted in the figure, initially picks up high-frequency noise along the longitudinal axis; a main contribution to this noise is the imaging noise of our detection system. Weak pattern formation is observed starting five periods after the end of the modulation, and strong longitudinal patterns then appear after about nine periods. A similar behavior is known, for example, from parametric amplifiers in optics: if no input signal amplitude is present, a signal emerging from noise (or zero-point energy) can form if the amplification is large enough. This behavior is also found in our numerical simulations based on the Gross-Pitaevskii equation. In simulations with parameters similar to the experiment, no pattern formation is observed for several breathing mode periods; from the onset of pattern formation, it takes just three periods for patterns to grow to their full strength. We find that the onset time of pattern formation in the simulations is earlier when more noise is added to the initial relaxed wave function. This experiment is closely related to the theoretical situation described in [6], where a single and sudden jump in the transverse trap frequency was used to excite the breathing mode, instead of a sinusoidal trap frequency modulation.
We have used a trap-jumping excitation in the case of very elongated BECs to demonstrate a second observation about the onset of longitudinal patterns, namely that these patterns can start to emerge in spatially localized domains, rather than uniformly across the whole cloud. To show this effect, we produced condensates in a very elongated cigar-shaped trap with trapping frequencies of $\{\omega_{xy}/(2\pi), \omega_z/(2\pi)\} = \{286.1, 2.8\}$ Hz. We temporarily jumped to a different trap of $\{\omega_{xy}/(2\pi), \omega_z/(2\pi)\} = \{88.4, 5.1\}$ Hz for a duration of 1.3 ms, then jumped back to the first trap and let the cloud evolve. After about 10 ms, we observed BECs in which a perfectly periodic density modulation stretched over almost the entire cloud; but it is also not uncommon to observe patterns in several separate domains, as shown in Fig. 4.
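Ref. [15] defines the actual pattern-visibility measure; as an illustrative stand-in, one common choice is the fraction of Fourier power of the axial density profile in a band around the expected pattern wavenumber, sketched below with synthetic data:

```python
import numpy as np

def pattern_visibility(profile, dz, k0, bandwidth):
    """Fraction of spectral power near wavenumber k0 (illustrative measure)."""
    n = profile - profile.mean()               # remove the smooth background
    power = np.abs(np.fft.rfft(n)) ** 2
    k = 2 * np.pi * np.fft.rfftfreq(len(n), d=dz)
    band = np.abs(k - k0) < bandwidth
    return power[band].sum() / power.sum()

# synthetic test: 15 um pattern on a 400-pixel profile with 1 um pixels
z = np.arange(400) * 1.0
profile = 100.0 * (1 + 0.2 * np.cos(2 * np.pi * z / 15.0))
print(pattern_visibility(profile, dz=1.0, k0=2 * np.pi / 15.0, bandwidth=0.05))
```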
Considering that the speed of sound in the elongated cloud is only about 2 mm/s [13,16], it is plausible that over the time scale of this experiment different regions of the BEC can be excited and evolve into longitudinal patterns independently of each other.
In the experiments described so far, our parametric excitation has led to longitudinal modes oscillating at half the parametric driving frequency. In the case of strong parametric amplification, it is theoretically expected that modes at other frequencies (higher resonance tongues), in particular modes at the driving frequency, can be excited, too [4]. This motivated our third set of experiments, in which we started again with a condensate in a trap with trap frequencies of $\{\omega_{xy}/(2\pi), \omega_z/(2\pi)\} = \{160.5, 7\}$ Hz. We then continuously modulated our transverse trap frequency by about 19% at a modulation frequency of 321 Hz. The resulting breathing motion is seen in Fig. 5, where we plot the Thomas-Fermi radius of the cloud in the transverse direction versus time. The graph clearly shows that the cloud, upon strong excitation, gradually starts behaving as an impact oscillator, i.e., as an oscillator bouncing off a stiff wall during each period. A classical impact oscillator, realized for example by a ball bouncing off a stiff surface, is a paradigm for nonlinear dynamics, nonlinear resonances and chaotic behavior. In the present case, the role of the stiff wall is played by the strong mean-field repulsion during the slim phase of the oscillations, when the BEC is strongly compressed in the transverse direction. Theoretically, the instability of the breathing mode upon strong driving and the impact-oscillator behavior have been analyzed in [17]. The behavior of the cloud radius in the transverse direction was also reproduced in our numerical simulation of the azimuthally symmetric 3D Gross-Pitaevskii equation, displayed as the solid line in Fig. 5. However, the numerics, when starting with a thoroughly relaxed wave function in the initial trap, show a sign of longitudinal pattern formation only after 18 ms, while in the experiment, longitudinal patterns clearly formed already during the third period (9 ms). This, again, hints at the importance of initial noise in the condensate that seeds the parametric amplification. In the experiment, the patterns start out similar to those displayed in Fig. 1 for the case of a weak drive at 321 Hz. But upon the action of the strong drive they quickly evolve into more complicated patterns, involving the excitation of several other modes. The inset of Fig. 5 shows an image taken after 5.2 driving periods (a), together with its Fourier transform (b). The Fourier spectrum reveals that several modes corresponding to the first resonance tongue of longitudinal modes, with nearly half the driving frequency, are excited. In addition, modes at twice the distance from the central Fourier peak are visible; those modes belong to the second resonance tongue of the main resonance.
In conclusion, we have experimentally observed the effects of parametric resonances in a BEC. The observed resonances lead to Faraday waves along the long BEC axis. These results advance the understanding of collective mode behavior in a condensate, which is one of the key tools to study BEC dynamics. In addition, we have shown that strongly driving the transverse breathing mode leads to an instability whereupon the mode amplitude increases exponentially, accompanied by the strong excitation of multiple sound-like modes. | 2018-04-03T00:35:35.433Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "9005c46934d8ec2e1da60075027f5d4eba65b3ff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0701028",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d63c01e0ffd8cb9dc29b595490832a5aa2790a95",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
78547075 | pes2o/s2orc | v3-fos-license | Cost variation analysis of antipsychotic drugs available in Indian market: an economic perspective
INTRODUCTION
The pharmaceutical industry has grown at a tremendous pace in India over the last few decades. 1 The Indian market is flooded with an enormous number of branded generic drugs from both domestic and foreign manufacturers. 2,3 There are more than 1 lakh formulations available across all categories of drugs under various brand names, and there is no system of registration of these medicines. 3,4 This has led to a wide variation in the cost of different brands of the same formulations. 2 Pharmacoeconomics plays a very important role in a developing country like India, where the cost of drug therapy is one of the major obstacles to effective treatment of a disease. 5 Increasing health care costs place an economic burden on patients. 5 In fact, many studies have indicated that drug prices play a significant role in therapeutic compliance. 1 To ensure that essential drugs are available at an affordable price, the Government of India exercises control over prices through an order called the DPCO (Drug Price Control Order). 6 The National Pharmaceutical Pricing Authority (NPPA) implements the DPCO. 6,7 Previously, only 74 drugs were under price control in 1995. 6,8 Under the provisions of DPCO 2013, 348 drugs are currently controlled by the NPPA. 6 However, only 3 antipsychotic drugs, namely Chlorpromazine (25 mg, 50 mg, 100 mg tablets, 25 mg/5 ml syrup and 25 mg/ml injection), Haloperidol (5 mg/ml injection) and Olanzapine (5 mg, 10 mg tablets), are under price control. 7 Antipsychotic drugs have been prescribed with increasing frequency for a variety of psychiatric disorders. 9 These range from acute psychosis and manic and psychotic depressive disorders to chronic conditions like schizophrenia, schizoaffective disorders and delusional disorders. 10 Most of these are chronic conditions requiring lifelong treatment. 11 This is responsible for higher medication costs, and cost-related poor medication adherence is in turn related to adverse health outcomes. 12 Regarding antipsychotic drugs, to the best of our knowledge hardly any studies are available that compare the cost of the different brands available in the Indian market. The current study presents a representative view of the existing situation of the cost of various antipsychotic drugs available in the Indian market.
The aim of the study was to evaluate the cost of oral and parenteral antipsychotic drugs of different brands currently available in the Indian market.
1. To analyse the difference in cost of different brands of the same dosage of the same active drug by calculating the percentage variation of cost and the cost ratio.
2. To compare the percentage price variation and cost ratio of single-drug therapy of oral and parenteral antipsychotic agents across the different brands available in the Indian market.

METHODS

1. A list of all oral and parenteral antipsychotic drugs available in the Indian market was obtained from http://www.medguideindia.com.
2. The cost of the same drug in the same strength and dosage form manufactured by different companies was obtained from the same website.
3. The difference between the maximum and minimum price of the same drug formulation manufactured by different pharmaceutical companies was noted.
4. The percentage variation in price was calculated using the following formula (a worked example in code follows this list):
Percentage price variation = (Price of most expensive brand - Price of least expensive brand) / Price of least expensive brand x 100
5. Drugs were classified into 5 categories depending on the percentage range of price variation: <24.99%, 25-49.99%, 50-99.99%, 100-499.99%, and >500%.
6. The cost ratio, i.e., the ratio of the cost of the costliest brand to that of the cheapest brand of the same formulation, was calculated. This indicates how many times more the costliest brand costs than the cheapest brand for each formulation.
7. Drugs manufactured by only one company, and combinations of antipsychotic drugs, were excluded.
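Both metrics reduce to elementary arithmetic on the brand-price list; the sketch below computes them for hypothetical placeholder prices (the brand names and values are not data from the study):

```python
# Hypothetical prices (INR) of one formulation across brands.
prices = {"brand_A": 4.50, "brand_B": 1.20, "brand_C": 0.90}

p_max, p_min = max(prices.values()), min(prices.values())
pct_variation = (p_max - p_min) / p_min * 100   # percentage price variation
cost_ratio = p_max / p_min                      # costliest vs. cheapest brand

print(f"percentage price variation: {pct_variation:.2f}%")  # 400.00%
print(f"cost ratio: {cost_ratio:.2f}")                      # 5.00
```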
Statistical analysis
The findings of our study were expressed as absolute numbers as well as percentages.
RESULTS
The prices of a total of 16 antipsychotic drugs available in the Indian market in 72 different formulations were analysed. These formulations are manufactured by different pharmaceutical companies. Table 1 shows the price variation among typical antipsychotic drugs.
In this group, Tab Haloperidol 0.25 mg shows the maximum price variation of 650% and a cost ratio of 7.5, followed by Tab Trifluoperazine 1 mg with a price variation of 555.5% and a cost ratio of 6.55.
Inj Haloperidol 50 mg (1 ml) shows the minimum price variation of 3.5%, with a cost ratio of 1.03.
The highest numbers of brands in the typical group are for Tab Haloperidol 5 mg (40), followed by Tab Haloperidol 10 mg (32). Table 2 shows the price variation among the atypical group of antipsychotic drugs.
In this group, Tab Risperidone 3 mg shows a price variation of 2282.35% with a cost ratio of 23.82, indicating that the costliest brand is about 23 times more expensive than the cheapest available brand. The next highest is Tab Risperidone 4 mg, with a price variation of 1976.92% and a cost ratio of 20.76. Price variation is highest in the Risperidone group.
Tab Loxapine shows the lowest price variation of 2.63% and a cost ratio of 1.02.
The highest number of brands was seen for Tab Olanzapine 5 mg (94).
DISCUSSION
This study was carried out with the main objective of computing the cost and percentage price variation of different antipsychotic drugs across the different brands available in the Indian market. Our findings showed a wide gap between the minimum and maximum prices of antipsychotic drugs manufactured by several companies.
The cost of many of the antipsychotic formulations shows a percentage price variation above 100%, reaching a maximum of 2282.85%. This is in accordance with similar studies done for antidiabetic, antiepileptic and antihypertensive drugs. 1-4,6,12-14 Possible reasons for this include companies offering incentives to physicians for prescribing a particular brand. 3 Also, at times the pharmacist does not dispense the brand prescribed by the doctor and substitutes it with some other brand, quoting non-availability as the reason. 15 This is done for economic gain, as some brands have a higher profit margin. 15 Inadequate government regulation and pricing policies, raw material costs, promotion and distribution costs, the existing market structure, asymmetry of information, etc. could also be contributing factors. 1-3,13-15 In India, most patients pay their medical bills out of their own pockets and are not covered by insurance schemes, unlike in developed countries. 1,13 Hence, there is an urgent need to control the price variation among the different brands available in the market. 1 The concerned authorities should therefore frame policies for regulating drug prices. 13 Prescribers should also be sensitized regarding the cost of different drugs, and a manual of comparative prices of different generic and branded drugs should be provided. 13 Doctors should be encouraged to write the generic names of drugs and to prescribe a cheaper brand whenever possible, because the superiority of one particular brand over the others has not been scientifically proved. 13 These steps can help in providing cost-effective therapy to patients, thereby improving compliance.
At present, only a few drugs are under the Drug Price Control Order. 12,13 For the overall betterment of health care in our country, it is desirable that the government try to bring all drugs under the DPCO. 13 Lastly, pharmacoeconomics should be an integral part of undergraduate and postgraduate medical education in order to create awareness about the impact of cost on the treatment of disease. 12
CONCLUSION
This study highlights the enormous price variation among different antipsychotic drugs in the Indian market. Hence, it is recommended that necessary measures be taken to maximize the benefits of therapy and minimize the negative economic and personal consequences. | 2019-03-16T13:12:42.896Z | 2017-02-24T00:00:00.000 | {
"year": 2017,
"sha1": "6bcc787ca199e27c95bf68afe4a61a630a087f6d",
"oa_license": null,
"oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/1495/1321",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3899510aacd1e72f565913045d9c50106bdec00f",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3888827 | pes2o/s2orc | v3-fos-license | The Hematologic Toxicity of Methotrexate in Patients with Autoimmune Disorders
The incidence of autoimmune diseases is increasing nowadays. Despite developments in the diagnosis and management of these diseases, they remain chronic. Patients' longer lifespans require long-term treatment with potentially harmful agents, such as Methotrexate or other immunosuppressive drugs. Methotrexate toxicity depends on the duration and cumulative dose of the drug, and on its combination with other drugs. Myelosuppression and consequent pancytopenia are the most frequent hematologic toxicity, occurring mostly late during low-dose Methotrexate administration. We present three cases of low-dose Methotrexate toxicity in older patients with rheumatoid arthritis and psoriasis. All patients had been treated continuously with low-dose Methotrexate for more than one year. Two old patients with RA and another with psoriasis developed pancytopenia causing severe neutropenia, cutaneous bleeding and bruising, and a septic condition. They required intravenous antibiotic therapy, corticosteroids, and limited transfusion support as a result of low-dose Methotrexate. We assessed the possible causes of the Methotrexate toxicities and found that all patients used non-steroidal anti-inflammatory drugs because of pain and a proton-pump inhibitor to avoid the development of peptic ulcer. Two patients recovered; the other died in a septic condition. We would like to draw the attention of hematologists, dermatologists and rheumatologists to the harmful effects of low-dose Methotrexate in this patient population and emphasize the role of rigorous and consistent hematologic testing to avoid these severe late complications.
Introduction
The incidence of autoimmune diseases increased toward the end of the twentieth century. Despite developments in diagnosis and therapy, autoimmune diseases have remained incurable. Only long-term treatment with potentially harmful agents, such as Methotrexate or other immunosuppressive drugs, is available for these patients.
Methotrexate, a synthetic antifolate, was developed in the 1950s after the discovery that dietary folic acid deficiency resulted in a decreased blast cell count in leukemic patients.
This drug has been extensively investigated and used successfully to treat solid tumours as well as hematological malignancies. It has been used, in much lower doses, in autoimmune diseases including rheumatoid arthritis, psoriasis, lupus, sarcoidosis, and eczema [1].
By inhibiting several enzymes of the folic acid pathway, MTX blocks purine and pyrimidine biosynthesis, leading to impaired DNA replication and cell proliferation. Tissues with high cellular turnover are thus the most sensitive to the cytotoxic impact of Methotrexate, which is responsible for its effectiveness as a chemotherapeutic agent, but also for many of its side effects, such as mucositis, hair loss and cytopenias.
Myelosuppression and pancytopenia, including thrombocytopenia, megaloblastic anaemia, and leukopenia, are the most frequent hematological toxicities, occurring late during low-dose Methotrexate treatment. Hematologic toxicity is a serious, potentially life-threatening, yet often underestimated complication of low-dose MTX therapy. The pancytopenia often causes neutropenic sepsis. Low-dose Methotrexate therapy can become dangerous in particular in the elderly, who are at greater risk of significant myelosuppression.
The prevalence of hematological toxicity is estimated to be 2% to 4% of all treated cases [2]. It may be much higher in the presence of predisposing factors: folic acid deficiency, hypoalbuminaemia, renal impairment and interaction with other drugs.
MTX-induced pancytopenia can occur in the early stage of treatment, possibly as a result of an idiosyncratic reaction [2], or later; the late toxicities of Methotrexate depend on the duration and cumulative dose of the drug, and probably on its combination with other drugs.
In this paper, we present three cases with pancytopenia and neutropenic sepsis. The differential diagnosis, the treatment and possible drug interactions are also discussed.
Patients' characteristics
The main characteristics of the patients are listed in Table 1.
All patients had been treated with low-dose MTX in another hospital and were referred to our department because of pancytopenia. All patients had febrile neutropenia and were in a septic condition.
The patients and their relatives were asked in detail for anamnestic data. All patients had been treated with low-dose MTX for more than one year because of autoimmune disease. Peripheral blood smears were examined and bone marrow aspirations were performed to exclude any underlying hematological malignancy. No bone marrow infiltration or increased blast ratio was observed.
In all cases, the diagnosis of MTX-induced pancytopenia was based on the exclusion of the above conditions.
Case 1
A 59-year-old female with a medical history of hypertension, asthma and rheumatoid arthritis had been taking Methotrexate for 2 years at a weekly dose of 15 mg. She had been taking theophylline for her asthma at a daily dose of 300 mg and 550 mg of Naproxen daily for her painful arthritis. She had a fever of 38°C twice in the week before admission, and physical examination revealed oral mucositis with bruises all over her skin. Her vital signs were stable, and her laboratory results showed pancytopenia (WBC 2.09 g/l, neutrophils 1.28 g/l, hemoglobin 63 g/l, haematocrit 20%, platelets 12 g/l), a low serum protein level (51 g/l), and high CRP (184.7 mg/l), with normal renal function. Peripheral blood smear and bone marrow aspiration analysis ruled out haematological malignancies, which supported the suspicion that MTX-induced myelosuppression was responsible for our findings.
We started empirical antibiotic and antimycotic therapy, together with Ca-folinate and a parenteral corticosteroid. She also received red blood cell and platelet transfusions.
Case 2
An 83-year-old female with a medical history of hypertension, ischaemic heart disease and rheumatoid arthritis had been taking Methotrexate for 16 months at a weekly dose of 16 mg by subcutaneous injection (0.3 ml s.c./week). The patient had been taking aspirin for ischaemic heart disease at a daily dose of 100 mg and pantoprazole for peptic ulcer prophylaxis at a daily dose of 40 mg. She had a fever of 39.5°C before admission and became confused at home. Physical examination revealed tachycardia, hypotonia, tachypnoea and urinary bleeding. Her vital signs were unstable, and she was admitted to our department in a septic condition. Her laboratory results showed pancytopenia (WBC 1.02 g/l, neutrophils 0.4 g/l, hemoglobin 83 g/l, haematocrit 27%, platelets 5 g/l), a low serum albumin level (32 g/l), a high procalcitonin level (13 µg/l), and decreased renal function (creatinine 189 µmol/l, GFR 18 ml/min). Peripheral blood smear and bone marrow aspiration analysis excluded haematological malignancies. MTX-induced myelosuppression was diagnosed. Empirical combined antibiotics and infusion were introduced, with Ca-folinate and a parenteral corticosteroid. She also received platelet transfusions. The blood culture was positive, growing Enterococcus faecalis. Despite adequate therapy, the patient died after 2 days of multiple organ failure.
Case 3
A 78-year-old female with hypertension, diabetes, erosive gastritis and psoriasis had received Methotrexate 15 mg/week for 19 months when she was admitted to our department with febrile neutropenia. She had been taking pantoprazole for erosive gastritis at a daily dose of 40 mg and gliclazide for diabetes mellitus at a daily dose of 60 mg. On admission she had oral mucositis, and her laboratory results showed severe myelosuppression (WBC 0.12 g/l, neutrophils 0.02 g/l, hemoglobin 81 g/l, haematocrit 22.4%, platelets 23 g/l), low total protein (45 g/l) and albumin (23 g/l) levels, an elevated creatinine (114 µmol/l) and high CRP (296.9 mg/l). She was immediately started on a combination of parenteral antibiotics and local antimycotics, and received corticosteroids, Ca-folinate, and red blood cell and platelet transfusions. On the 8th day of her treatment her cell counts started to normalize (WBC 10.34 g/l, hemoglobin 102 g/l, platelets 44 g/l).
Discussion
Methotrexate is the commonest disease-modifying anti-inflammatory drug used in monotherapy or in combination with other drugs and biological agents in the treatment of many autoimmune disorders. MTX is a highly selective competitive inhibitor of the enzyme dihydrofolate reductase; it consequently reduces the production of thymidylate and purine biosynthesis. DNA synthesis eventually halts and cells can no longer replicate [3].
The folate antagonist MTX also works on the adenosine pathway, with important anti-inflammatory effects. Inhibition of transformylase by MTX leads to the accumulation of 5-aminoimidazole-4-carboxamide ribonucleotide and ultimately to increased levels of adenosine. Adenosine is a potent inhibitor of inflammation and induces vasodilation. This effect of MTX does not seem to be affected by folate supplementation [4]. The intracellular polyglutamation of MTX prolongs its intracellular presence, which contributes to its toxicity [5,6]. Adverse side effects can be quickly progressive and fatal. The main side effects include myelosuppression, hepatotoxicity, pneumonitis and renal toxicity.
Hematologic toxicity
The prevalence of hematologic toxicity, including leukopenia, thrombocytopenia, megaloblastic anaemia and pancytopenia, is estimated to be 2% to 4% [2]. The frequency of pancytopenia may increase if other drugs, such as non-steroidal anti-inflammatory drugs (NSAIDs), proton pump inhibitors (PPIs) and antidiabetics, are co-administered. The prevalence of pancytopenia may also be increased in cases of folic acid deficiency, hypoalbuminemia, concomitant infections, advanced age, dehydration and renal impairment [7].
The pathogenesis of MTX-induced pancytopenia is still unknown. Pancytopenia may be acute or chronic and is thought to be an allergy-like reaction [8]. Because of their structural similarity, MTX and folates compete for cellular uptake, cellular storage as polyglutamates, and binding to enzymes. Depleted intracellular folate levels have been documented in peripheral blood lymphocytes of RA patients treated with MTX [9]. Delayed drug clearance has been observed in the elderly; this is caused by prolonged enterohepatic circulation, which is responsible for a higher risk of pancytopenia. Leukopenia occurs within one to three weeks, and marrow recovery is generally observed within approximately 3 weeks [2,10-13].
Discontinuation of MTX represents the basis of therapy, but the use of G-CSF and methylprednisolone is also beneficial [14].
Hepatotoxicity
Hepatotoxicity is a common complication of long-term treatment with MTX [15,16], especially in cases of obesity, alcoholism, diabetes, non-alcoholic steatohepatitis and hepatitis B or C infection [17,18]. An elevated aminotransferase level is the most common laboratory sign of hepatotoxicity. It was observed in MTX-treated rheumatoid arthritis and psoriatic arthritis patients with a frequency varying from 7.5% to 26% [16]. The mechanism of the hepatotoxic side effect of MTX is unclear. Folic acid supplementation is associated with a reduced incidence of aminotransferase elevation in inflammatory diseases treated with MTX [19].
Pulmonary toxicity
Pneumonitis is one of the most serious but infrequent side effects of chronically used MTX. Its prevalence is about 0.9% to 1% [20]. It has been thought that a hypersensitivity reaction to MTX mediated by activated T cells plays a role in the mechanism of pneumonitis [21]. In fact, MTX leads to cytokine release by type 2 alveolar cells, causing alveolitis through the recruitment of inflammatory cells [22].
MTX can also stimulate lung fibroblasts and epithelial cells to induce the recruitment of eosinophils [23]. It has also been demonstrated that neutrophils are implicated in the pathogenesis of lung fibrosis [24]. Cough and dyspnoea may occur from a few days to more than a year after the beginning of MTX therapy, and even several weeks after MTX discontinuation [25].
Renal toxicity
Renal toxicity occurs mostly at higher doses of MTX, especially with intravenous administration of the drug. MTX and its metabolites are relatively insoluble in acidic urine [26]. An increase in urine pH results in greater solubility of the drug and its metabolites. For that reason, it is recommended to monitor renal function from time to time in order to control renal side effects, especially in the case of higher-dose MTX administration. However, serum creatinine may be a misleading measure of renal function in older patients because of an overall reduction in lean muscle mass. Urine alkalinisation and leucovorin rescue are the cornerstones of the management of the earlier signs of renal dysfunction [15,27].
Factors that contribute to the toxicity
Advanced age is a significant predictor of MTX toxicity [28]. The pharmacokinetic profile of elderly patients changes drug distribution: decreased lean body mass and end-organ blood flow, decreased hepatic drug metabolism, and decreased renal drug excretion lead to toxicity. MTX side effects can be increased by renal impairment or reduced renal blood flow, as with NSAID use [29]. Therefore, MTX is contraindicated in any patient with an eGFR <30 mL/min [28].
According to pharmacokinetic studies, about 50% of MTX is bound to serum albumin in the circulation (42% to 57%), whereas its metabolite (7-hydroxymethotrexate) is extensively (91% to 93%) bound. Significant interindividual variations in the activity of binding proteins affect the efficacy and potential toxicity of MTX. The free fraction of MTX determines the influx of MTX into cells and its rate of clearance by the kidneys. Hypoalbuminaemia results in increased levels of free MTX, because MTX binding to serum albumin is proportional to the amount of albumin, resulting in an increased risk of myelotoxicity. Hypoalbuminaemia in RA may be due to increased albumin turnover, presumably caused by high consumption of albumin at sites of inflammation, and to poor nutritional status. Occult chronic liver disease and advanced age may also be reflected in low serum albumin [29]. Poor nutritional status has been associated with an increased risk of MTX toxicity.
Prevention and management of MTX toxicity
There are some general aspects of MTX administration and post-treatment management. To avoid MTX side effects, it is advised that a routine blood count be performed every four to eight weeks [1], and it is mandatory to determine renal function from time to time. When renal function is impaired, MTX dose adjustment is necessary: when creatinine clearance (CrCl) is between 30 and 60 ml/min, the dose of MTX is reduced by 50%, and when CrCl is under 30 ml/min, the dose of MTX should be reduced by 75% [30]. Folic acid treatment (1 to 3 mg/day) given 2-3 days after MTX administration decreases the frequency of toxicities such as mucositis, hematologic abnormalities and liver enzyme elevations, without seeming to interfere with clinical efficacy [31].
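A minimal sketch of the renal dose-adjustment rule just stated (the 50%/75% reductions from Ref. [30]); note that the text above also calls MTX contraindicated at eGFR <30 mL/min, so dosing at that level is a clinical judgment rather than a formula:

```python
def adjusted_mtx_dose(weekly_dose_mg: float, crcl_ml_min: float) -> float:
    """Weekly MTX dose after the renal adjustment described in the text."""
    if crcl_ml_min >= 60:
        return weekly_dose_mg            # no adjustment needed
    if crcl_ml_min >= 30:
        return weekly_dose_mg * 0.50     # CrCl 30-60 ml/min: reduce by 50%
    return weekly_dose_mg * 0.25         # CrCl <30 ml/min: reduce by 75%

print(adjusted_mtx_dose(15.0, 45.0))     # 7.5 mg/week
```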
The most complicated preventive practice is avoiding drug interactions. Most elderly patients take many drugs that have the potential to displace MTX from serum proteins and/or to reduce MTX clearance. The best known are the interactions with trimethoprim-sulfamethoxazole (TMP-SMX) and NSAIDs [30,32,33].
Monitoring plasma MTX level is an essential part of high dose MTX therapy, but it is not necessary in low dose MTX treatment.
Our three cases demonstrate that low-dose MTX can easily cause life-threatening complications. Monitoring for hematologic toxicity should be done every 4 to 8 weeks by primary care physicians, but our patients had had no blood sampling for 4 months. All patients were in a septic condition due to granulocytopenia, and despite adequate treatment one patient died of multiple organ failure in our department. Age is also a determinant of whether patients survive pancytopenia and its associated complications, such as sepsis: the patient who died was 83 years old, whereas the patients who recovered from severe sepsis were younger.
We would like to draw the attention of hematologists, dermatologists, rheumatologists and primary care physicians to the toxic effects of low-dose Methotrexate.
Conclusion
Despite the possible side effects of weekly administered low-dose MTX used in autoimmune diseases, MTX is very well tolerated and its efficacy is excellent. When monitoring is done correctly, the side effects can be avoided. It is very important that primary care physicians and hematologists are aware of these complications and recommendations, because the majority of these serious complications can be detected in time and even prevented. Patients on MTX therapy should be regularly monitored with renal and liver function tests and hematology tests to identify myelosuppression and avoid the sequelae of pancytopenia.
More attention should be paid to patients' nutritional status, especially the serum albumin level, before commencing MTX. Folic acid supplementation should be considered in all patients taking MTX.
In our experience, MTX-induced pancytopenia is more common than expected and is probably under-reported. We recommend vigilance for this late and potentially fatal complication of MTX therapy. | 2019-03-17T13:04:15.712Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "55ed2b8ee35e3336b7e8292473e5e4a6720d226c",
"oa_license": "CCBY",
"oa_url": "http://neoplasm.imedpub.com/the-hematologic-toxicity-of-methotrexate-in-patients-with-autoimmune-disorders.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2ab79634a2bcc69ca68a7f41524c676026bd206a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4666624 | pes2o/s2orc | v3-fos-license | The Adapted Fresno test for speech pathologists, social workers, and dieticians/nutritionists: validation and reliability testing
Purpose The current versions of the Adapted Fresno test (AFT) are limited to physiotherapists and occupational therapists, and new scenarios and scoring rubrics are required for other allied health disciplines. The aim of this study was to examine the validity, reliability, and internal consistency of the AFT developed for speech pathologists (SPs), social workers (SWs), and dieticians/nutritionists (DNs). Materials and methods An expert panel from each discipline was formed to content-validate the AFT. A draft instrument, including clinical scenarios, questionnaire, and scoring rubric, was developed. The new versions were completed by ten SPs, 16 SWs, and 12 DNs, and scored by four raters. Interrater reliability was calculated using intraclass correlation coefficients (2,1) for the individual AFT items and the total score. The internal consistency of the AFT was examined using Cronbach’s α. Results Two new clinical scenarios and a revised scoring rubric were developed for each discipline. The reliability among raters was excellent for questions 1, 3, and 6 across all disciplines. Question 7 showed excellent reliability for SPs, but not for SWs and DNs. All other reliability coefficients increased to moderate or excellent levels following training. Cronbach’s α was 0.71 for SPs, 0.68 for SWs, and 0.74 for DNs, indicating that internal consistency was acceptable for all disciplines. Conclusion There is preliminary evidence to show that AFT is a valid and reliable tool for the assessment of evidence-based practice knowledge and skills of SPs, SWs, and DNs. Further research is required to establish its sensitivity to detect change in knowledge and skills following an educational program.
Introduction
The importance of evidence-based practice (EBP) in allied health is well documented in the literature. 1,2 Clinical decisions that are based on patients' unique circumstances, sound clinical expertise, and the best available research evidence are known to deliver the best outcomes for patients and their families. 3-5 Allied health practitioners hold positive attitudes toward EBP and believe in the value of research evidence in informing their clinical decisions. However, applying research findings to clinical decisions is not a simple process and is often difficult to achieve. One of the most commonly reported barriers to evidence uptake in allied health is the lack of knowledge of the EBP process and lack of skill in critically appraising research. 6-8 Teaching EBP is therefore an important step in promoting evidence-based clinical decision making.
Allied health practitioners need to understand the principles of EBP before they can apply it.
Early EBP educational programs include the development of clinical questions, literature searches, and critical appraisal. 9 To evaluate the impact of such educational programs and document competence of individual practitioners, educators need objective and psychometrically sound instruments or assessment tools. Based on a review of the literature, the Fresno test is the only available instrument that comprehensively assesses EBP competence across all relevant domains. 10 The Fresno test consists of two clinical scenarios and 12 short-answer questions that require respondents to formulate a focused question, identify the most appropriate research design that will address the question, show knowledge of electronic database searching, identify issues important for determining the relevance and validity of a research paper, and discuss the magnitude and importance of research findings. 11 The test is scored by using a standardized grading rubric that describes explicit grading criteria. The Fresno test has content validity, good-to-excellent interrater reliability for all questions, and excellent internal consistency. 11 However, this tool focuses on assessing competence in medical students only, and therefore it cannot be used across different health disciplines.
In 2009, McCluskey and Bishop modified the Fresno test to measure the change in EBP skills and knowledge of occupational therapists following exposure to an EBP workshop. 12 New clinical scenarios (ie, versions 1 and 2) were developed to suit rehabilitation professionals, such as physiotherapists and occupational therapists. The 12 questions in the original Fresno test were reduced to seven (ie, questions 1-7), removing questions about diagnosis and complex statistics (ie, questions 8-12). The scoring rubric was also revised. Similar to the original Fresno test, the seven-item Adapted Fresno test (AFT) measures the following: the ability to develop a focused clinical question using the PICO (population, intervention, comparison, and outcome) format, the ability to develop a search strategy, the ability to interpret and critically appraise a research paper, and knowledge associated with understanding of the hierarchy of evidence and methodological biases in study designs, databases and other sources of evidence, and study designs. The AFT has been reported to have acceptable psychometric properties: interrater reliability ranged from good to excellent for individual items (version 1, intraclass correlation coefficient [ICC] 0.80-0.96; version 2, 0.68-0.94) and excellent for the total score (version 1, 0.96; version 2, 0.91); acceptable internal consistency (Cronbach's α 0.74); and responsiveness to change in novice learners. 12
Materials and methods
This study was approved by the Human Research Ethics Committee of the University of South Australia and the Ethics Review Board of the University of Tasmania.
Development and content validation of AFt for speech pathology, social work, and dietetics/nutrition An expert panel consisting of four practitioners from each discipline was formed to content-validate the AFT. Content validity refers to "… how well the combined elements used to construct the instrument truly describe the conceptual domain of interest". 13 The panel represented practitioners with more than 10 years of clinical experience and with previous exposure to EBP training or research. The majority had graduate degrees in their respective disciplines or other clinical areas.
The panel members were presented with the original Fresno test and AFT, and were asked to examine the questionnaire and comment on which questions should be included in the new versions for speech pathologists, social workers, and dieticians/nutritionists. All members agreed that only questions adapted by AFT should be included for these disciplines. Following discussion, new clinical scenarios were developed for each discipline. The scoring rubric of the AFT was considered applicable to the new versions except for questions 1 ("Write a focused clinical question for one scenario to help you organize a search of the literature"), 2 ("Where might you find answers to these and other similar clinical questions? Name as many possible sources of information as you can, not just the ones you think are good sources"), and 4 ("If you were to search for Medline for original research to answer your question, describe the search strategy you might use"). Discipline-specific information was required to revise the scoring key for these questions.
Following consultation with the expert panel, a draft instrument including the clinical scenarios, questionnaire, and scoring rubric was prepared by the primary author. The draft instrument was emailed to the experts for feedback on the clarity of the entire instrument and completeness of the scoring rubric. The instrument and scoring rubric for each
131
Adapted Fresno test for health professionals discipline were revised based on comments from the expert panel and returned to them for a final round of feedback. No further changes were required in the instrument.
participants
The new AFT versions were completed by ten speech pathologists, 16 social workers, and 12 dieticians/nutritionists who agreed to participate in a larger study aimed at examining the impact of a journal club on the EBP knowledge and skills of allied health professionals. 14 They were asked to individually complete either a paper-and-pencil version or electronic version of the questionnaire at a time convenient for them. There were equal numbers of participants who held bachelor's degrees and postgraduate degrees. Less than half had previous training in research or EBP, and the majority had been in clinical practice for less than 10 years.
Interrater reliability of the AFt
Interrater reliability is the "… degree to which measurements of the same phenomenon by different raters will yield the same results, or the consistency of results between raters". 15 Interrater reliability was calculated for individual items and the total AFT score using ICCs (2,1) and 95% confidence intervals. For interpretation of results, ICC values of $0.80 indicate excellent reliability, values between 0.60 and 0.79 denote moderate reliability, and values ,0.60 mean questionable reliability. 16 Four individuals experienced in research and teaching EBP for allied health students served as raters for the study. Before the study began, the raters reviewed and discussed the AFT test, and collaboratively scored a sample test for each discipline. They were then given a practice period, where they scored another set of sample tests, then compared and discussed their differences in scoring. Following discussion, the raters were instructed to score each test independently without conferring or comparing ratings. Raters were given 2 weeks to mark all questionnaires.
Initial examination of the interrater reliability showed poor reliability between raters for questions 2, 4, and 5 of all versions (ie, AFT for speech pathology, social work, and dietetics/nutrition) and question 7 for social work and dietetics/nutrition. This prompted the first author, who has experience in using the previous AFT versions, to provide further training and discussion of the scoring procedure to the raters. The training involved an explanation of the rating system, discussion of common rater errors, advice on process for decision making, and practice on interpreting the rubric. Those questions with poor reliability were rescored 2 weeks later.
Internal consistency of the AFT
Internal consistency reflects the coherence of the components of a scale or instrument. 17 The internal consistency of the AFT was examined using Cronbach's α.
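As an illustration of this computation, the following minimal Python functions implement Cronbach's α from an item-score matrix, together with an "alpha if item deleted" check of the kind reported in the Results; the data they would run on are assumed, not taken from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)         # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item removed in turn."""
    X = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(X, j, axis=1)) for j in range(X.shape[1])]
```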
Results
Content validity of the AFT for speech pathologists, social workers, and dieticians/nutritionists
The content validity of the AFT instrument was established through formal feedback from the expert panel. The comments received were consistent across disciplines, and involved issues associated with the wording of the clinical scenarios. No comments were made on the questionnaire itself; however, additional possible answers were suggested for the scoring rubric. For example, in question 1, where respondents are asked to write a focused clinical question, the expert panel provided additional PICO terms or synonyms. Some members of the panel suggested further sources of research information for question 2, such as discipline-specific electronic databases, websites, and professional organizations.
Two new clinical scenarios and a revised scoring rubric were developed for each discipline. Table 1 shows the final versions of the clinical scenarios. Table 2 lists the questions included in the new AFT versions. A copy of the scoring rubric may be obtained from the primary author upon request.
Interrater reliability of the AFT
The reliability among raters was excellent for questions 1, 3, and 6 across all disciplines, as shown in Table 3. Question 7 showed excellent reliability for speech pathology, but not for social work or dietetics/nutrition. All other reliability coefficients increased to moderate or excellent levels following further training and discussion.
Internal consistency of the AFT
Cronbach's α was 0.71 for speech pathology, 0.68 for social work, and 0.74 for dietetics/nutrition, indicating internal consistency was acceptable for all disciplines. Deletion of any of the items did not improve the internal consistency of the AFT for any discipline.
Discussion
The results provide preliminary evidence of the psychometric integrity of the AFT, and support its use in the assessment of EBP knowledge and skills in these disciplines.
Table 1 Clinical scenarios
Speech pathology clinical scenarios
A 64-year-old lady with chronic anomic aphasia secondary to stroke has been referred to you for speech therapy. She had been participating in extensive language therapy and felt she was no longer progressing. You would like to know if using constraint-induced aphasia therapy would further improve her language skills.
Social work clinical scenarios
You have received a referral for a 38-year-old male client with alcohol-use disorder. He began drinking 6 years ago to manage work-related stress. He indicates that he wants to reduce his alcohol consumption, but has not been successful. You want to find out whether there is any evidence to support the use of motivational interviewing for alcohol abuse over an educational intervention.
A 52-year-old single lady with a long history of obsessive-compulsive disorder (OCD) has been referred to you for behavior therapy. Her obsessions involve severe fear of contamination and having to urinate. Her compulsions involve excessive washing behaviors and avoiding places without an easy escape or readily accessible bathrooms. She fears having OCD her entire life, having received many years of therapy with little effect on her symptoms. You would like to know if there is value in using acceptance and commitment therapy to reduce feelings of anxiety and distress.
Dietetics/nutrition clinical scenarios
A 58-year-old housewife has been referred to a dietetics outpatient clinic for advice on dietary management of her chronic kidney disease, including high potassium levels. Her urea and creatinine levels are significantly higher than normal, and have continued to gradually increase since diagnosis. This patient is not receiving dialysis; however, her renal specialist indicates that dialysis will likely need to commence in a few years if there is ongoing deteriorating kidney function. You want to find out from the published literature the most effective dietary management to prevent progression of the kidney problem in a nondiabetic patient.
A 75-year-old male inpatient has been referred to you for possible enteral or parenteral feeding. The patient was admitted to hospital 3 days ago with severe abdominal pain and vomiting. He has been unable to manage an oral diet, and is currently receiving intravenous fluids. Tests indicate that the patient is suffering from pancreatitis, a condition that he has never experienced before. Doctors are managing the medical condition conservatively at present, with no indications for surgery. You want to find out the best nutrition-intervention approach for optimal outcomes.
Table 2 Questions in the Adapted Fresno test
Introduction: please read the two clinical scenarios, and try to answer all of the following questions to the best of your ability. Do not worry if you are unfamiliar with the diagnoses mentioned; this should not affect your answers. You will find most of the following questions quite challenging, and will need to think carefully when answering them. If you are unsure of an answer, please say so.
Q1: Write a focused clinical question for one of the scenarios that will help you to organize a search of the clinical literature.
Q2: Where might you find answers to these and other similar clinical questions? Name as many possible sources of information as you can, not just the ones you think are "good" sources. Describe the most important advantages and disadvantages of each type of information source you have listed.
Q3: What type of study (design) would best answer your clinical question (see Q1), and why?
Q4: If you were to search Medline for original research to answer your clinical question, describe the search strategy you might use. Be as specific as you can about which topics and search categories (fields) you would use. Explain your rationale for taking this approach. Describe how you might limit your search if necessary and explain your reasoning.
Q5: When you find a report of original research on this question or any others, what characteristics of the study will you consider, to determine if it is relevant? (Q6 and Q7 will ask you how to determine if the study is valid and how important the findings are. For this question, please focus on how to determine if it is really relevant to your practice.)
Q6: When you find a report of original research related to your clinical question or any others, what characteristics of the study will you consider, to determine if its findings are valid? (You've already addressed relevance, and Q7 will ask how to determine the importance of the findings. For this question, please focus on the validity of the study.)
Q7: When you find a report of original research that relates to your clinical question or any others, what characteristics of the findings will you consider to determine their magnitude and significance (clinical and statistical)?
These findings are consistent with the previously reported validity and reliability of the original Fresno test 11 and AFT versions for rehabilitation professionals (ie, occupational therapists and physiotherapists). 12 The importance of EBP training in facilitating an evidence-based approach to clinical practice has been highlighted by a number of systematic reviews. [18][19][20][21] Many of the training programs reported in these reviews relied on self-report data, which potentially reflect inaccuracies in actual knowledge. 22 Measuring the effectiveness of such training programs therefore requires objective and robust instruments to document changes in the competence of the individuals being trained. To the authors' knowledge, the AFT is the only objective measure of EBP knowledge and skills that has been tested and applied in allied health. McCluskey and Bishop, who first reported on the validity and reliability of the AFT, urged researchers to develop new clinical scenarios and modify the instrument to suit other health disciplines. 12 The current study addressed this gap and provided researchers and educators with an instrument to measure EBP skills and knowledge in speech pathologists, social workers, and dieticians/nutritionists. The new versions of the AFT were content-validated, and although the internal consistency of the different versions was slightly lower than the original AFT, the Cronbach's α-values were still acceptable.
The reliability estimates for some of the items (questions 2, 4, 5, and 7) were questionable; however, after further training, the ICCs increased considerably, indicating moderate-to-excellent reliability of scores for these items. This finding highlights the importance of providing training to raters as a strategy to improve interrater reliability. Rater training has been shown to increase consistency of scoring between raters. 23 It emphasizes developing a common understanding among raters so they will apply the rating system as consistently as possible. 24 This common understanding, also called "frame of reference", addresses the common sources of rater disagreements, which include lack of overlap among what is observed, discrepant interpretations of descriptor meanings, and personal beliefs or biases. 24 However, research also suggests that even comprehensive training will not ensure rater agreement. 25 Studies have suggested that a rater's expertise may improve accuracy, 23,26 which implies that rater characteristics are also an important consideration in ensuring consistency between raters. Reliability in examination scoring can be expected if the raters are highly knowledgeable in the domain in which ratings are made. Studies have found a relationship between rater expertise and rating accuracy, as well as the ability to differentiate between different domains in a rating scale. 24,26 The raters involved in this study are experienced EBP educators and researchers, and these attributes could have contributed to the consistency in scoring. Because of their exposure to teaching, the raters may have already gained a wealth of experience in examination assessment, and could be expected to respond well to training. It is therefore not surprising to find that following training in AFT rating, the reliability estimates improved significantly for the previously questionable items. Based on the results of the current study, it appears that there are three important variables that can contribute to rater reliability:
explicit scoring criteria (ie, a scoring rubric), raters' training, and raters' professional experience. As with any study, this research has limitations that need to be considered when interpreting the results. First, the sample size may have been too small to produce sufficiently reliable results. Second, the expert panel was limited to four practitioners, which may not represent the collective set of views in the different professions. Third, the ability of the test to detect change following educational programs has not been tested.
Despite these limitations, the results of this study provide a valuable resource for EBP educators and researchers who require an objective instrument to measure knowledge and skills among social workers, speech pathologists, and dieticians/nutritionists.
Conclusion
The authors propose the use of AFT in evaluating the EBP knowledge and skills of social workers, speech pathologists, and dieticians/nutritionists. EBP educators and researchers should identify raters with experience in EBP teaching or those with previous EBP training, who should then receive training for AFT scoring. The reliability of raters should be evaluated before they participate in the actual assessment.
While the content validity, internal consistency, and reliability of the AFT have been shown in this study, further research is required to establish its sensitivity to detect change in knowledge and skills following an educational intervention for dieticians, speech pathologists, and social workers. | 2016-08-09T08:50:54.084Z | 2014-02-27T00:00:00.000 | {
"year": 2014,
"sha1": "275d1043c50f0f6323934e409f9dbff2f11d009e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=19158",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7516ec335c11bf4ffb45e94a63b68485d71dc417",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247839430 | pes2o/s2orc | v3-fos-license | Dynamical systems of cosmological models for different possibilities of $G$ and $\rho_{\Lambda}$
The present paper deals with the dynamics of a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological model with a time varying cosmological constant $\Lambda$, where $\Lambda$ evolves with the cosmic time (t) through the Hubble parameter (H). We consider that the model dynamics has a reflection symmetry $H \rightarrow -H$, with $\Lambda(H)$ expressed in the form of a Taylor series with respect to H. Dynamical systems for three different cases based on the possibilities of the gravitational constant G and the vacuum energy density $\rho_{\Lambda}$ have been analysed. In Case I, both G and $\rho_{\Lambda}$ are taken to be constant. We analyse the stability of the system by using the notion of spectral radius, the behavior of perturbations along each of the axes with respect to cosmic time, and the Poincaré sphere. In Case II, we have a dynamical system analysis for $G \neq$ constant and $\rho_{\Lambda} =$ constant, where we study stability by using the concept of spectral radius and the perturbation function. In Case III, we take $G \neq$ constant and $\rho_{\Lambda} \neq$ constant, where we introduce a new set of variables to set up the corresponding dynamical system. We find the fixed points of the system and analyse stability from different directions: by analysing the behaviour of the perturbation along each of the axes, by Center Manifold Theory, and by the stability at infinity using the Poincaré sphere, respectively. Phase plots and perturbation plots have been presented. We study in depth the cosmological scenario with respect to the fixed points obtained and analyse the late time behavior of the Universe. Our model agrees with the fact that the Universe is in an epoch of accelerated expansion. The EOS parameter $\omega_{eff}$ and the total energy density $\Omega_{tt}$ are also evaluated at the fixed points for each of the three cases, and these values are in agreement with the observational values in [1].
Introduction
In the past two decades many researchers have put tremendous effort into developing and improving the plethora of theoretical models that explain the accelerated expansion of our Universe. Astrophysical measurements revealing this phenomenon have prompted a quest for convincing theoretical explanations from various possible directions [2,3,4,5,6,7,8,9,10,11,12]. The dark energy model is one such proposal: it attributes the expansion phenomenon to an energy component with negative pressure, so-called dark energy, which dominates the universe at late time. The simplest type of dark energy is the cosmological constant [13]. In this context of accelerated expansion, the theory of general relativity (GR) modified by a cosmological constant term Λ, known as the famous ΛCDM model, is one of the most popular [14].
But despite its fine agreement with the observational data, there are two major issues that have motivated modifications to the assumed ΛCDM model, namely "the cosmological constant problem", which deals with the discrepancy between the theoretical and expected values of the cosmological constant [15,16,17], and "the cosmic coincidence problem" [18]. To address these issues, running Λ cosmological models have been developed.
Shapiro et al. [19,20,21,22] made the first development regarding the scaling evolution of the cosmological constant. Among the running cosmological constant models that have been proposed, it is worth mentioning the time dependent cosmological constant motivated by quantum field theory [22,23,24], Λ(t) cosmology induced by a slowly varying Elko field [25], a running vacuum in the context of supergravity [26], etc. In Newtonian gravity, we can write the time variation of G explicitly, without any requirement of further constraints to be satisfied. But in GR there are other constraints to be satisfied. For instance, if we assume that the ordinary energy-momentum conservation law holds, then there should not be any variation of the gravitational coupling with respect to spacetime; otherwise the ordinary energy-momentum conservation law will be violated [27,28]. In the light of Dirac's idea [29,30,31], which proposes that some of the fundamental constants cannot remain constant forever, it is essential to make some modifications to the GR field equations [32,33] if we are to consider this running cosmological constant term. In this regard, studying the cosmic scenario with varying G needs modified field equations as well as modified conservation laws. We can mention Brans-Dicke theory, where GR is modified with a varying G without violating the ordinary energy-momentum conservation law [34,35,36].
There are many other models employing varying G theories that give a better understanding of the Universe regarding its late time behavior and nature [36,37,38,39,40,41,42,43,44,45,46,47,48,49]. As there are no rigorous proofs indicating whether the cosmological constant is running or not [49], one can study the cosmological implications of different possible theoretical assumptions for the Λ term, motivated by quantum field theory [20,21,50] and other theoretical considerations [22,23] about the varying Λ form. Aleksander Stachowski and Marek Szydłowski [51] have also studied the dynamics of cosmological models with various forms of Λ(t).
In this paper, we consider a running vacuum model which evolves in a power series of H. Our aim is to set up dynamical systems out of the cosmological field equations by introducing new sets of variables and to study the stability of the systems in the light of the cosmological implications of the system. Based on the possibilities of the gravitational constant G and the vacuum energy density ρ Λ , we develop a different dynamical system for each of three cases and analyse the stability through different approaches by finding the respective fixed points. The cosmological scenario associated with each fixed point has been discussed in detail.
We arrange the paper in the following way. Section 1 gives the introduction. In section 2 we give preliminaries that provide a brief introduction to the dynamical systems approach to cosmology, with some definitions and theorems which will be required to understand the subsequent analysis in the paper. In section 3 we have three cases. In Case I of section 3 we show the setting up of the cosmological equations and the dynamical system analysis where both G and ρ Λ are taken to be constant, which is the case of standard ΛCDM cosmology. Under Case I we have three subsections, based on analysis using the spectral radius, the perturbation function, and stability at infinity using the Poincaré sphere. We present, in Case II, the model dynamics where G ≠ constant and ρ Λ = constant.
Under Case II, we have two subsections based on analysis through the spectral radius and through perturbations along each of the axes with respect to increase in cosmic time. In Case III we have the dynamical system analysis where G ≠ constant and ρ Λ ≠ constant. Under Case III we present three subsections, on the basis of analysing stability by the use of the perturbation function, Center Manifold Theory, and the Poincaré sphere, respectively. In section 4 we give the conclusion of our study.
Stability analysis for each case at the respective fixed points is presented, and the corresponding cosmological implications, along with the evaluation of various cosmological parameters at those fixed points, are also obtained.
Preliminaries
A dynamical system is a mathematical system that describes the time dependence of the position of a point in the space that surrounds it, termed the ambient space. Here, we approach the system through an autonomous system of ordinary differential equations (ASODE), that is, a system of ordinary differential equations which does not depend explicitly on time. S. Surendra et al. [52] have also used this approach to study cosmological models in the presence of a scalar field using different forms of potential. From [52] we can also notice that in a three dimensional dynamical system we can analyse stability by analysing the nature of the perturbation along each of the axes. A dynamical system is generally written in the form [53]
$\dot{x} = f(x)$, (1)
where x = (x 1 , x 2 , ..., x n ) is an element of the state space X ⊆ R n and the function f: X → X defines the flow. The overhead dot denotes the derivative with respect to cosmic time, t. A point x o with f(x o ) = 0 is called a fixed point of the system. Then: (i) the fixed point x o is said to be stable if for every ε > 0 there exists δ > 0 such that ∥x 0 − x o ∥ < δ implies ∥x n − x o ∥ < ε for all n ∈ N; otherwise, the fixed point x o will be called unstable; (ii) the fixed point x o is said to be attracting if there exists ζ > 0 such that ∥x 0 − x o ∥ < ζ implies x n → x o as n → ∞; (iii) the fixed point x o is said to be locally asymptotically stable if it is both stable and attracting. If in the previous item ζ = ∞, then x o is said to be globally asymptotically stable. Jacobian matrix of the dynamical system at a fixed point: the Jacobian matrix of the dynamical system given in (1) at a fixed point x o is
$J(x_o) = \left(\frac{\partial f_i}{\partial x_j}\right)\Big|_{x_o}, \quad i, j = 1, 2, ..., n$,
where ∂f i /∂x j denotes the first partial derivative of f i with respect to the j th component x j of the element x = (x 1 , x 2 , ..., x n ) ∈ X ⊆ R n .
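As a minimal illustration of this recipe (find the fixed points, then evaluate the eigenvalues of the Jacobian there), the following Python/SymPy sketch analyses a toy two dimensional system; the system itself is assumed purely for demonstration and is not one of the cosmological systems studied below.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Toy autonomous system x' = f1, y' = f2 (illustrative only):
f1 = x * (1 - x) - x * y
f2 = y * (x - sp.Rational(1, 2))

F = sp.Matrix([f1, f2])
J = F.jacobian([x, y])                    # Jacobian matrix of the system

for fp in sp.solve([f1, f2], [x, y], dict=True):
    eigs = J.subs(fp).eigenvals()         # eigenvalues at each fixed point
    print(fp, dict(eigs))
```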
Linear stability theory is one of the simplest methods used to understand the dynamics of a system near a fixed point. In linear stability theory the function f is assumed to be sufficiently regular so that we can linearise the system around its fixed point. The eigenvalues of the Jacobian matrix at a fixed point play an important role in studying the stability of the fixed point. If at least one of the eigenvalues of the Jacobian matrix at a fixed point has zero real part, then we cannot do the stability analysis by using the eigenvalues of the Jacobian matrix. Such a fixed point is referred to as a non-hyperbolic fixed point. To analyse the stability of such fixed points we need a better approach than linear stability analysis, such as Center manifold theory, the perturbation function, or Lyapunov stability. Center manifold theory is the most popular method: it reduces the dimensionality of the system and determines the stability of the critical points of the parent system according to the stability of the reduced system. Wiggins [53] and Carr [56] have discussed center manifold theory in detail.
The Jacobian matrix J of order n×n given in Definition 2.5 has n eigenvalues. The eigenvectors of J associated with the eigenvalues with negative real part span a vector space called the stable space, J s , and the eigenvectors associated with positive real part span a vector space called the unstable space, J u . Similarly, J c represents the vector space spanned by the eigenvectors associated with zero real part. Here, the superscripts s, u, c label the respective vector spaces, and J s , J u and J c are all subspaces of R n . The space R n can be written as the direct sum of these three subspaces, that is, R n = J s ⊕ J u ⊕ J c . These results have been detailed in Carr's book [56], Elaydi's book [57] and Zhang's book [58]. If at least one eigenvalue of J at a fixed point x o has positive real part, then x o will be unstable whether it is hyperbolic or not. But if x o is non-hyperbolic and no eigenvalue has positive real part, then we can use Center manifold theory to determine the stability of the fixed point.
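A small numerical sketch of this decomposition: given any Jacobian, the dimensions of J s , J u and J c can be read off from the real parts of its eigenvalues. The tolerance and the example matrix below are illustrative choices, not quantities from the paper.

```python
import numpy as np

def subspace_dimensions(J, tol=1e-9):
    """Dimensions (s, u, c) of the stable, unstable and center subspaces,
    counted from the real parts of the eigenvalues of the Jacobian J."""
    re = np.linalg.eigvals(np.asarray(J, dtype=float)).real
    s = int(np.sum(re < -tol))
    u = int(np.sum(re > tol))
    c = int(np.sum(np.abs(re) <= tol))
    return s, u, c

# Example: eigenvalues are -1 and +/- i, so (s, u, c) = (1, 0, 2)
print(subspace_dimensions([[-1.0, 0.0, 0.0],
                           [ 0.0, 0.0, 1.0],
                           [ 0.0,-1.0, 0.0]]))
```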
Let us consider a two dimensional dynamical system. Using a suitable coordinate transformation we can rewrite any system of the form (1) as follows:
$\dot{x} = Ax + f(x, y), \quad \dot{y} = By + g(x, y)$, (2)
where A is a c × c matrix having eigenvalues with zero real parts, B is an s × s matrix having eigenvalues with negative real parts, and (x, y) ∈ J c × J s . The functions f and g satisfy f(0, 0) = 0, Df(0, 0) = 0, g(0, 0) = 0 and Dg(0, 0) = 0. Definition 2.6. [56] Center Manifold: it can be locally represented as
$W^c(0) = \{(x, y) \in J^c \times J^s : y = h(x),\ |x| < \delta,\ h(0) = 0,\ Dh(0) = 0\}$
for a sufficiently regular function h(x) and δ however small it may be.
The proof of the existence of the center manifold for the system (2) is also provided in [56], where the dynamics of the system (2) restricted to the center manifold is given as
$\dot{v} = Av + f(v, h(v))$
for sufficiently small v ∈ R c .
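The tangency condition that defines h(x) can be solved order by order. The sketch below does this with SymPy for an assumed toy system ẋ = xy, ẏ = −y − x², approximating the center manifold as y = h(x) = a x² + b x³; the system is chosen only to illustrate the procedure described here and does not appear in the paper.

```python
import sympy as sp

x, a, b = sp.symbols('x a b', real=True)

# Center manifold ansatz y = h(x) with h(0) = h'(0) = 0:
h = a * x**2 + b * x**3

xdot = x * h           # x' = x*y restricted to the manifold
ydot = -h - x**2       # y' = -y - x**2 restricted to the manifold

# Tangency condition h'(x)*x' - y' = 0, matched order by order in x:
residual = sp.expand(sp.diff(h, x) * xdot - ydot)
sol = sp.solve([residual.coeff(x, 2), residual.coeff(x, 3)], [a, b], dict=True)[0]
print(sol)                          # {a: -1, b: 0}

# Reduced dynamics on the manifold: u' = u*h(u) = -u**3, so the origin is stable.
print(sp.expand(xdot.subs(sol)))    # -x**3
```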
Theorem 2.1. [59] Consider a flow defined by a dynamical system on R 2 ,
$\dot{x} = P_1(x, y), \quad \dot{y} = P_2(x, y)$, (9)
where P 1 and P 2 are polynomial functions of x and y. Let P 1m and P 2m denote the m th degree terms in P 1 and P 2 respectively. Then, the critical points at infinity for the m th degree polynomial system (9) occur at the points (X, Y, 0) on the equator of the Poincaré sphere where
$X P_{2m}(X, Y) - Y P_{1m}(X, Y) = 0$,
or equivalently at the polar angles θ j and θ j + π satisfying
$G_{m+1}(\theta) \equiv \cos\theta\, P_{2m}(\cos\theta, \sin\theta) - \sin\theta\, P_{1m}(\cos\theta, \sin\theta) = 0$.
This equation has at most m + 1 pairs of roots θ j and θ j + π unless G m+1 (θ) is identically zero. If G m+1 (θ) is not identically zero, then the flow on the equator of the Poincaré sphere is counter-clockwise at points corresponding to polar angles θ where G m+1 (θ) > 0 and it is clockwise at points corresponding to polar angles θ where G m+1 (θ) < 0.
Theorem 2.2. [59] The flow defined by (9) in a neighborhood of any critical point of (9) on the equator of S 2 , except the points (0, ±1, 0), is topologically equivalent to the flow defined by the system
$\pm\dot{y} = y z^m P_1(1/z, y/z) - z^m P_2(1/z, y/z), \quad \pm\dot{z} = z^{m+1} P_1(1/z, y/z)$,
the signs being determined by the flow on the equator of S 2 as determined in Theorem 2.1.
Theorem 2.3. [59]
Let us consider a flow in R 3 defined by
$\dot{x} = P_1(x, y, z), \quad \dot{y} = P_2(x, y, z), \quad \dot{z} = P_3(x, y, z)$, (10)
where P 1 , P 2 and P 3 are polynomial functions of x, y, z of maximum degree m.
The critical points at infinity for the m th degree polynomial system (10) occur at the points (X, Y, Z, 0) on the equator of the Poincaré sphere S 3 where
$X P_{2m}(X, Y, Z) - Y P_{1m}(X, Y, Z) = 0, \quad X P_{3m}(X, Y, Z) - Z P_{1m}(X, Y, Z) = 0, \quad Y P_{3m}(X, Y, Z) - Z P_{2m}(X, Y, Z) = 0$,
where P 1m , P 2m and P 3m denote the m th degree terms in P 1 , P 2 and P 3 respectively.
Theorem 2.4. [59] The flow defined by the system (10) in a neighborhood of (±1, 0, 0, 0) ∈ S 3 is topologically equivalent to the flow defined by the system
$\pm\dot{y} = y w^m P_1(1/w, y/w, z/w) - w^m P_2(1/w, y/w, z/w)$,
$\pm\dot{z} = z w^m P_1(1/w, y/w, z/w) - w^m P_3(1/w, y/w, z/w)$,
$\pm\dot{w} = w^{m+1} P_1(1/w, y/w, z/w)$.
Dynamical system analysis for different possibilities of G and ρ Λ
In this section we present the dynamical system analysis when G =constant and ρ Λ =constant. This is a standard model and we present it as case I of our analysis.
Case I: Dynamical system analysis when G = constant and ρ Λ = constant. The Einstein field equations in the presence of the cosmological constant Λ are given by
$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = 8\pi G\, \tilde{T}_{\mu\nu}$, (11)
where T µν is the ordinary energy-momentum tensor, T̃ µν ≡ T µν + g µν ρ Λ is the modified energy-momentum tensor and ρ Λ = Λ/(8πG) is the vacuum energy density in the presence of Λ. We assume that the universe is filled with a perfect fluid with velocity four-vector field U µ . With this consideration we have T µν = −p m g µν + (ρ m + p m )U µ U ν , where ρ m is the density of matter-radiation and p m = (γ − 1)ρ m is the corresponding pressure. In a similar way, the modified energy-momentum tensor can be expressed as
$\tilde{T}_{\mu\nu} = -p_{tt}\, g_{\mu\nu} + (\rho_{tt} + p_{tt}) U_\mu U_\nu$,
where p tt = p m + p Λ , ρ tt = ρ m + ρ Λ and p Λ = −ρ Λ is the associated pressure in the presence of Λ. By assuming a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric along with the above modified energy-momentum tensor [60,61,62,63], we have the following gravitational field equations:
$3H^2 = 8\pi G\, \rho_{tt} = 8\pi G\, \rho_m + \Lambda$, (14)
$3H^2 + 2\dot{H} = -8\pi G\, p_{tt}$,
where the overhead dot denotes the derivative with respect to the cosmic time t.
With the help of the FLRW metric and the Bianchi identities, respecting the Cosmological Principle embodied in the FLRW metric, we have the following generalized local conservation law:
$\dot{\rho}_m + \dot{\rho}_\Lambda + 3H(\rho_m + p_m + \rho_\Lambda + p_\Lambda) = 0$.
If we put p Λ = −ρ Λ and p m = (γ − 1)ρ m in the above equation, we have the following balanced conservation equation:
$\dot{\rho}_m + 3H\gamma\rho_m = -\dot{\rho}_\Lambda$.
Since ρ Λ is taken to be constant, the right hand side of the above equation vanishes to give
$\dot{\rho}_m + 3H\gamma\rho_m = 0$. (18)
We take the cosmological term to evolve with H through a power series,
$\Lambda(H) = \Lambda_0 + \sum_n \alpha_n H^n$. (19)
In addition, let us consider that there is reflection symmetry with respect to H, that is, H → −H. So, if the system has λ(t) as its solution then λ(−t) is also a solution of the system. As a result, only the terms containing even powers of H are present in the power series (19). Shapiro and Solà [22] have also considered in detail the contribution of only the even powers of the Hubble parameter to the time varying Λ(t).
Using (19) in (14), we have
$3H^2 = 8\pi G\, \rho_m + \Lambda_0 + \alpha_2 H^2 + \alpha_4 H^4 + \dots$, (20)
where Λ 0 = Λ(H)| 0 and the α n 's, n = 2i, i = 1, 2, ..., are the coefficients in the Taylor series expansion of Λ(H), given by α n = (1/n!) d n Λ(H)/dH n | 0 , n = 2i, i = 1, 2, .... To set up the dynamical system we consider the following set of new variables: x = (H/(8πG)) 2 and y = ρ m . With this substitution we can express (20) in terms of the new set of variables as
$x = \frac{C_o + y}{(3 - \alpha_2)\, 8\pi G}$, (21)
where C o = Λ 0 /(8πG) and only the leading H 2 correction is retained. Using (21) and the newly introduced variables in the above field equations, we obtain a set of ordinary differential equations which represents the required dynamical system, where Θ = ln a denotes the logarithmic time with respect to the scale factor a.
The overhead prime denotes the derivative with respect to Θ, while the overhead dot denotes the derivative with respect to cosmic time t.
$x' = (\alpha_2 - 3)x - \frac{(\gamma - 1)y - C_o}{8\pi G}$, (22)
$y' = -3\gamma y$. (23)
Here we consider only a few powers of H beyond the term C o so as to ensure a better ΛCDM limit. All the other terms involving higher powers of H are neglected, as their contribution is completely negligible at present [64]. To analyse stability, firstly we need to find the fixed points of the system.
For this we equate x′ = 0, y′ = 0. From (23), y′ = 0 implies either y = 0 or γ = 0. We can also have y → 0 in evaluating the fixed point. We need to observe all the possibilities and their implications for the evolving cosmological scenario. When y = 0 in the expression of x, we obtain the first fixed point F 1 = (−C o /((α 2 − 3)8πG), 0). Again, when γ = 0, then from (18) we see that ρ m = constant. Let us suppose that ρ m = ξ, that is, y = ξ. Then the second fixed point we have obtained, for the case of γ = 0, is F 2 = ((−C o − ξ)/((α 2 − 3)8πG), ξ). When we consider y → 0 we obtain a special case of non-hyperbolic fixed points called a normally hyperbolic fixed point, which is actually a set of non-isolated fixed points. For normally hyperbolic fixed points stability is decided by the sign of the real part of the remaining eigenvalue, even if one of the eigenvalues of the Jacobian matrix vanishes. So when we choose y → 0 we can write the fixed point as F 3 = (−C o /((α 2 − 3)8πG), y → 0). Now let us evaluate the Jacobian matrices J F1 , J F2 and J F3 at the respective fixed points to study the stability of the system.
The Jacobian matrices at the respective fixed points are upper triangular, and the eigenvalues of an upper triangular matrix are given by its diagonal entries. So the eigenvalues of J F1 and J F3 are α 2 − 3 and −3γ, while those of J F2 (evaluated at γ = 0) are α 2 − 3 and 0. The fixed points F 1 and F 3 are hyperbolic for γ ≠ 0, as none of the eigenvalues vanishes.
Fig. 1 and Fig. 2 show the corresponding phase plots.
When α 2 > 3, the eigenvalues of J F1 possess opposite signs, which shows that F 1 behaves as a saddle fixed point. Fig. 3 shows the phase plot of the system for α 2 = 4 > 3, where trajectories in some directions are attracted towards F 1 while trajectories along some other directions are repelled away from it. For the fixed point F 2 we see that J F2 is non-hyperbolic, as one of its eigenvalues vanishes. For the non-hyperbolic fixed point F 2 we cannot analyse stability using the above linear stability theory. Since it is a two dimensional dynamical system, we can use the notion of the perturbation function and the spectral radius of the Jacobian matrix for the non-hyperbolic fixed point F 2 to analyse the stability. In the subsequent paragraphs we show the stability analysis using these methods.
A. Stability analysis for F 2 using the concept of spectral radius: Let us consider the Jacobian matrix at the fixed point F 2 . The spectral radius of a matrix is the maximum of the absolute values of all the eigenvalues of the matrix. The stability of a fixed point (x, y) of a dynamical system can be determined by the value of the spectral radius of its Jacobian matrix evaluated at the fixed point. The notion of spectral radius in discussing the stability of a fixed point has been given in detail in [57].
The spectral radius of J F2 is σ(J F2 ) = max{|α 2 − 3|, 0} = |α 2 − 3|, which is less than unity for 2 < α 2 < 4. From the above arguments, F 2 is locally asymptotically stable for 3 < α 2 < 4 or 2 < α 2 < 3. It can be noted that we have assumed α 2 ≠ 3 here, so that we can study our system with fixed points in the finite phase plane.
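A one-line numerical check of this criterion follows; the parameter value is illustrative, and the off-diagonal Jacobian entry is a placeholder, since only the diagonal matters for an upper triangular matrix.

```python
import numpy as np

def spectral_radius(J):
    """Largest absolute value among the eigenvalues of J."""
    return float(np.max(np.abs(np.linalg.eigvals(np.asarray(J, dtype=float)))))

alpha2 = 2.5                                   # sample value, so sigma = |alpha2 - 3| = 0.5
J_F2 = [[alpha2 - 3.0, -1.0 / (8 * np.pi)],    # off-diagonal entry is illustrative
        [0.0, 0.0]]
print(spectral_radius(J_F2))                   # 0.5 < 1, consistent with stability
```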
B. Stability analysis for F 2 using the concept of the perturbation function: To analyse stability in a simpler way we find the perturbation function along each axis as a function of the logarithmic time Θ. It is noted that while studying the perturbation along the x−axis we assume y = 0, as we are analysing only along the x−axis. We can also make the interval in which α 2 lies finer by analysing the stability from the side of the perturbation function. To find the perturbation function we perturb the system by a small amount, that is, x = (−C o − ξ)/((α 2 − 3)8πG) + η x and y = ξ + η y , where η x and η y represent small perturbations along the x and y axes respectively. With this perturbed system, (22) and (23) can be solved to obtain η x and η y as functions of the logarithmic time Θ (eqs. (25) and (26)). When α 2 < 3, as Θ tends to infinity the perturbation along the x-axis, η x , evolves to the constant value ξ/((α 2 − 3)8πG). In the expression of η y , if we consider Θ → ∞ we get an ∞/∞ form, so we can apply L'Hospital's rule to the expression of η y to obtain its limiting value as −ξ for any value of γ. We can also directly put γ = 0 in (23) to get η′ y = 0 and obtain η y = constant, but by doing so we would not be able to show the nature of η y in terms of Θ, and with (26) we can obtain the constant value towards which η y evolves in a finer way. As the perturbations along both axes evolve to constant values when α 2 < 3, we conclude that F 2 is stable for α 2 < 3, and it is locally asymptotically stable for 2 < α 2 < 3. The perturbation plots in Fig. 4 show the variation of the perturbation function along the y axis with respect to Θ for F 2 . From Fig. 4 we see that when γ = 0, η y becomes a constant function, but if γ ≠ 0 then as Θ → ∞, η y takes the ∞/∞ form, so by applying L'Hospital's rule as Θ → ∞, η y tends to −ξ, which is a constant value. Fig. 5 shows that the perturbation along the x−axis tends to the constant value ξ/((α 2 − 3)8πG) when α 2 < 3; in the plot shown in Fig. 5 we take ξ = 1, 8πG = 1 and α 2 = 2.5 < 3. In terms of the variables x and y we obtain the values of the effective equation of state ω eff and the total energy density Ω tt = Ω m + Ω Λ , where Ω Λ denotes the vacuum energy density. When we evaluate these cosmological parameters at the fixed point F 2 , for any value of α 2 and ξ we obtain ω eff = −1, and the relative energy density at F 2 is found to be Ω tt = 1, in agreement with the observational data in [1]. The above results have been tabulated in Table I. Fig. 1 shows the phase plot for the stable F 1 at γ = 2, α 2 < 3, and Fig. 4 shows the variation of η y with respect to Θ for F 2 .
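The decay of the perturbations can also be checked numerically. The sketch below integrates the linear system given in (22)-(23) from a perturbed initial condition near F 2 ; the parameter values are illustrative, and the right-hand side is our reconstruction of the system from the quantities quoted in the text, not necessarily the authors' exact equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha2, gamma, Co, k = 2.5, 0.0, 1.0, 1.0   # k stands for 8*pi*G; illustrative values
xi = 1.0                                     # y = rho_m = xi at F2 (gamma = 0)

def rhs(theta, u):
    """Reconstructed Case I system: x' and y' as functions of log time Theta."""
    x, y = u
    dx = (alpha2 - 3.0) * x - ((gamma - 1.0) * y - Co) / k
    dy = -3.0 * gamma * y
    return [dx, dy]

x_star = -(Co + xi) / ((alpha2 - 3.0) * k)   # x-coordinate of F2
sol = solve_ivp(rhs, (0.0, 40.0), [x_star + 0.2, xi + 0.1], rtol=1e-8)

# eta_x settles at the constant shift predicted by the perturbation analysis:
print(sol.y[0, -1], x_star - 0.1 / ((alpha2 - 3.0) * k))   # both ~4.2
```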
C. Stability at infinity and the Poincaré sphere: A detailed explanation of the Poincaré sphere and the behavior at infinity is given in [59]. By using stereographic projection we can study the behavior of trajectories far from the origin by considering the so-called Poincaré sphere, where we project from the center of the unit sphere S 2 onto the (x, y)−plane tangent to S 2 at the north pole [59], using the transformation of coordinates
$X = \frac{x}{\sqrt{1 + x^2 + y^2}}, \quad Y = \frac{y}{\sqrt{1 + x^2 + y^2}}, \quad Z = \frac{1}{\sqrt{1 + x^2 + y^2}}$. (27)
The critical points at infinity are mapped onto the equator of the Poincaré sphere. We consider the flow in R 2 given by
$\dot{x} = f(x, y), \quad \dot{y} = g(x, y)$. (28)-(29)
The degree of this polynomial system is one; let f 1 and g 1 denote the homogeneous polynomials of first degree in f and g. In terms of the polar coordinates r, θ with x = r cos θ, y = r sin θ, we can express the above equations as equations (30) and (31) for ṙ and θ̇. The order of r in (30) as r → ∞ is 1 and that of (31) is 0. Then, using Theorem 2.1, we find G 2 (θ), which is also equal to the highest power term in r of the θ′ expression [65].
Solving for the θ at which G 2 (θ) = 0, we get θ = nπ, where n = 0, ±1, ±2, .... So we can conclude that G 2 (θ) is not identically equal to zero, but it becomes zero in those directions where θ takes the value nπ. Since G 2 (θ) has at most 2 pairs of roots θ and θ + π, the equator of the Poincaré sphere has a finite number of fixed points, located at the θ for which G 2 (θ) = 0, that is, at θ = 0, π. At γ = 0, 4/3 and 2, G 2 (θ) takes the corresponding explicit forms. The flow on the equator of the Poincaré sphere is counterclockwise at points corresponding to polar angles {θ : G 2 (θ) > 0}, where f 1 (x, y) = (α 2 − 3)x − (γ − 1)y/(8πG) and g 1 (x, y) = −3γy. Using (27), the above equation can be rewritten in terms of X and Y. Solving for X and Y from the above equations, we find that the fixed points occur at (±1, 0, 0). Also, we see from the expression in (33) that for γ = 0 the flow on the equator of S 2 is clockwise for XY > 0 and counterclockwise for XY < 0. For γ = 4/3, the flow on the equator of S 2 is clockwise for XY > 0 and −(1 + α 2 )XY > Y 2 /(24πG), and the flow is counterclockwise for XY < 0. For γ = 2, the flow on the equator of S 2 is clockwise for XY > 0 and −(3 + α 2 )XY > Y 2 /(8πG), and the flow is counterclockwise for XY < 0. Using Theorem 2.2, the behavior in the neighbourhood of the critical point (1, 0, 0) is topologically equivalent to the behavior of the system obtained by putting the expressions of f and g in (34) and (35), whose Jacobian matrix can then be evaluated. Since the degree of f(x, y) and g(x, y) is odd, the behavior at the antipodal point (−1, 0, 0) is exactly the same as the behavior at (1, 0, 0). Fig. 8 and Fig. 9 show the phase plots for the unstable saddle point and the repeller respectively. Fig. 6 shows the phase plot of the stable attractor (0, 0) for analysing stability at infinity for Case I when γ = 0, α 2 < 3, taking C o = 8πG = 1. Fig. 7 shows the phase plot of the unstable repeller (0, 0) for analysing stability at infinity for Case I when γ = 0, α 2 > 3, taking C o = 8πG = 1. Fig. 8 refers to stability at infinity for Case I when γ = 4/3, α 2 < 3, taking C o = 8πG = 1, and Fig. 9 shows the phase plot of the unstable repeller (0, 0) for stability at infinity for Case I when γ = 4/3, α 2 > 3, taking C o = 8πG = 1.
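The function G 2 (θ) can be generated symbolically from the first-degree parts f 1 and g 1 quoted above. The sketch below assumes the Perko-style definition G m+1 (θ) = cos θ g m (cos θ, sin θ) − sin θ f m (cos θ, sin θ); the sin θ factor in the result reproduces the roots θ = nπ noted in the text, while any further roots depend on the parameter values.

```python
import sympy as sp

theta, a2, k = sp.symbols('theta alpha_2 k', positive=True)  # k stands for 8*pi*G
gamma = 0  # dark-energy case

c, s = sp.cos(theta), sp.sin(theta)
f1 = (a2 - 3) * c - (gamma - 1) * s / k   # f1 evaluated at (cos(theta), sin(theta))
g1 = -3 * gamma * s                       # g1 evaluated at (cos(theta), sin(theta))

# Assumed Perko-style definition of G_2 on the equator:
G2 = sp.factor(c * g1 - s * f1)
print(G2)  # contains a sin(theta) factor, so theta = n*pi are always roots
```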
Case II: Dynamical system analysis for Ġ ≠ 0 and ρ Λ = constant. Let us rewrite the general relativity field equations (11) as
$G_{\mu\nu} = 8\pi G\, \tilde{T}_{\mu\nu}$, (36)
where G µν = R µν − (1/2)g µν R denotes the Einstein tensor. With the general Bianchi identity ∇ µ G µν = 0, the above field equation gives the relation
$\nabla^{\mu}\left(G\, \tilde{T}_{\mu\nu}\right) = 0$. (37)
This implies that the local conservation law takes the following form, which we name the mixed local conservation law:
$\dot{G}(\rho_m + \rho_\Lambda) + G\left[\dot{\rho}_m + \dot{\rho}_\Lambda + 3H(\rho_m + p_m + \rho_\Lambda + p_\Lambda)\right] = 0$. (38)
If we assume that Ġ ≠ 0 and ρ Λ = constant, then the above relation leads to
$\dot{G}(\rho_m + \rho_\Lambda) + G(\dot{\rho}_m + 3H\gamma\rho_m) = 0$,
which indicates a non-conservation of matter, as G does not remain constant here. But if we take Ġ = 0 as well as ρ̇ Λ = 0, assuming the standard local covariant conservation of matter-radiation (18), (38) reduces to that standard conservation law. Since we are inclined to a qualitative study of the dynamics of the Universe, we set up the dynamical system for Case II by introducing the new variables x = 8πG/(3H 2 ) and y = ρ m . With these new variables the field equations can be rewritten, and again using the Taylor series form of Λ(H) in the field equation 8πGρ m + Λ = 3H 2 , the dynamical system is represented by a pair of ordinary differential equations for x′ and y′ (eqs. (44) and (45)), obtained using the expressions of Ġ, Ḣ and Λ 0 /(3H 2 ) found above. In order to find the fixed points we equate x′ = 0 and y′ = 0. If x′ = 0, then either y = 0 or γ = 0, as x ≠ 0; otherwise, if x = 0, then (41) would be violated.
Again, if γ = 0 is considered, then we get y = b, where b is a real constant, and x = a, where a, b ∈ R satisfy a(b + ρ Λ ) = 1. So the first fixed point we have obtained here is P = (a, b), where a(b + ρ Λ ) = 1; a, b ∈ R. Now consider y = 0 with γ ≠ 0; then x = 1/ρ Λ , that is, Q = (1/ρ Λ , 0) is the second fixed point. In studying the stability of the fixed points, the Jacobian matrix of the system plays a leading role. At the fixed points P and Q, the Jacobian matrix J 2 of the system takes the forms J P and J Q respectively. Since P is obtained when γ = 0, J P becomes a null matrix, and hence the eigenvalues of J P are m 1 = 0, m 2 = 0. The eigenvalues of J Q are m 3 = 0 and m 4 = −3γ. We see that at least one of the eigenvalues vanishes at both fixed points, and hence both P and Q are non-hyperbolic. So we need to use the concept of the perturbation function, as it is easy to analyse the behaviour of the system from the nature of the perturbation function expressed in terms of Θ. As Θ tends to ∞, if the perturbation along any of the axes grows, the fixed point is unstable, whereas if the perturbation along each of the axes decays to zero or evolves to a constant value, the fixed point is stable. We shall not employ Center manifold theory for two dimensional problems, as it is simpler to use the method of the perturbation function; but for higher dimensional problems, Center manifold theory is one of the prominent tools for studying the stability of a system, and we show later, in Case III of this section, how the dynamics of the center manifold determines the dynamics of the entire system.
A. Stability analysis using the concept of the spectral radius of the Jacobian matrix at the respective fixed points: The spectral radii of J P and J Q are σ P = 0 and σ Q = |−3γ| = 3γ. Since σ P < 1, all the eigenvalues of J P lie inside a unit disc. So P is stable.
When γ > 0, σ Q < 1 if γ < 1/3, and σ Q = 1 if γ = 1/3. So Q is stable for 0 ≤ γ < 1/3, and we cannot say whether Q is stable or not if γ = 1/3. In addition, when γ = 1/3 one eigenvalue of J Q , namely −3γ, has absolute value equal to one, while the other eigenvalue, zero, has absolute value less than one. In this case a bifurcation may occur, where a small change in the parameter values of the system leads to a sudden qualitative change in the topological behavior of the system. We need to further our study using the concept of perturbations along each axis and study the behaviour of the perturbations when Θ → ∞.
B. Stability analysis using the concept of Perturbation function:
Let x = x P + η x and y = y P + η y , where x P , y P are the values of x, y at P and η x , η y are small perturbations along the x−axis and y−axis respectively. Putting the perturbed values of x and y in the dynamical system equations (44) and (45) leads to the perturbation relations at P, where c 1 is an arbitrary constant. Similarly, at the fixed point Q we obtain the corresponding relations, where c 2 is an arbitrary constant.
As Θ increases and tends to ∞, η y for P evolves to a constant value for all γ ∈ [0, 2], and η y for Q also converges to zero for all γ ∈ [0, 2]. Since the perturbation along each axis does not grow with the increase in Θ, P is stable for all γ ∈ [0, 2], in particular for γ = 0. When γ ≠ 0, η y → −b as Θ → ∞, but if we directly put γ = 0 in the expression of η y above, η y becomes a constant function, η y = c 1 − b. Fig. 10 shows the variation of the perturbation along the y−axis, η y , with respect to Θ as γ → 0 + for the fixed point P. From Fig. 10 we see that as γ → 0 from the right the curves gradually tend to η y = c 1 − b. Fig. 11 shows that η y decreases exponentially as Θ increases and ultimately decays to zero as Θ tends to ∞ for Q, for any positive value of γ. So it is obvious that η y → 0 as Θ → ∞ even for γ = 4/3, beyond the bound γ < 1/3 determined from the concept of the spectral radius; Q is no doubt stable for all 0 < γ < 1/3. We have calculated the value of the effective equation of state parameter ω eff = −1 + γxy and the relative energy density Ω tt = Ω m + Ω Λ , where Ω m = xy and Ω Λ = Λ 0 /(3H 2 ) + α 2 /3 = 1 − xy. At both fixed points P and Q we get ω eff = −1, Ω tt = 1, which is in agreement with the observational data in [1]. Since ω eff is found to be negative unity, the presence of the stable fixed point P indicates the presence of negative pressure in the developed cosmological model, which provides our model with an accelerated expansion phase of the Universe. We have tabulated the results in Table II; Fig. 11 shows the variation of η y with respect to Θ for Q.

Case III: Dynamical system analysis for Ġ ≠ 0 and ρ̇ Λ ≠ 0. In this case both G and ρ Λ are no longer constants, that is, Ġ ≠ 0 and ρ̇ Λ ≠ 0. The relation in (38) now becomes
$\dot{G}(\rho_m + \rho_\Lambda) + G(\dot{\rho}_m + \dot{\rho}_\Lambda + 3H\gamma\rho_m) = 0$. (46)
We introduce the following new variables to set up the corresponding dynamical system: x = 8πG/(3H 2 ), y = ρ m , z = ρ Λ . We take the derivatives of the newly introduced variables with respect to the logarithmic time Θ and, using (46) and the necessary substitutions, obtain the expression of z′; putting this expression of z′ in (48) we get the expression of y′, and finally, putting the value of y′ in (47), we get the expression of x′. The expressions of the total energy density Ω tt and the effective equation of state ω eff in terms of the variables x, y, z follow from p tt = (γ − 1)y − z and ρ tt = y + z. We equate x′ = 0, y′ = 0, z′ = 0 using (51), (50) and (49) to obtain the fixed points. As y → 0, z → 0, then, since x, y, z obey the relation x = 1/(y + z), x must tend to infinity. Viewed from the sequential approach of real analysis, any real sequence of the form 1/n converges to zero as n → ∞ but never equals zero: for every ǫ > 0 there exists a positive integer m such that |1/n − 0| < ǫ for all n ≥ m, that is, every neighbourhood of zero contains infinitely many members of the sequence 1/n. Similarly, when n → 0, 1/n → ∞; so as y → 0, z → 0, x must tend to infinity. To ensure that the fixed points obtained are physically feasible for the developed system, α 2 must be equal to 3, and with this consideration we can analyse our fixed points in the finite phase plane. Let us consider x′ = 0, y′ = 0, z′ = 0 at α 2 = 3; then as y → 0.0009, z → 0, x must also tend to the number l = 1/(0.0009 + 0) ≈ 1111. Let this fixed point be denoted by S = (x → l, y → 0.0009, z → 0).
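Given values of the variables x, y, z, the two diagnostics quoted here are direct to evaluate. The following small function is an illustration using the stated expressions for p tt and ρ tt , checked at values approaching the fixed point S; the inputs are assumed for demonstration.

```python
def cosmological_parameters(x, y, z, gamma):
    """omega_eff and Omega_tt from the Case III variables, using
    p_tt = (gamma - 1)*y - z and rho_tt = y + z as quoted in the text."""
    p_tt = (gamma - 1.0) * y - z
    rho_tt = y + z
    omega_eff = p_tt / rho_tt
    omega_tt = x * rho_tt          # Omega_tt = 8*pi*G*rho_tt / (3*H^2) = x*(y + z)
    return omega_eff, omega_tt

# Near the fixed point S (y -> 0.0009, z -> 0, x -> 1/(y + z)) with gamma = 0:
print(cosmological_parameters(1.0 / 0.0009, 0.0009, 0.0, 0.0))   # (-1.0, 1.0)
```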
The stability of the above fixed point is determined by the eigenvalues of the 3 × 3 Jacobian matrix J 3 of the above dynamical system. At S, when γ = 0, J 3 has eigenvalues 0, −16.74 and −(α 2 − 3) = 0.
Since some of the eigenvalues become zero, S is a non-hyperbolic fixed point.
We analyse stability through the perturbation function and center manifold theory, as this is a three dimensional problem with a non-hyperbolic fixed point, for which these methods are more suitable.
A. Stability analysis for S using the concept of the perturbation function: We perturb the system by a small amount, putting x = x F + η x , y = y F + η y , z = z F + η z , where x F , y F , z F represent the values of x, y, z at the fixed point to be analysed for stability and η x , η y , η z denote the perturbations along the x, y, z axes respectively. With these perturbed values in the dynamical system equations (51), (50) and (49) and the necessary substitutions, we obtain the perturbations as functions of the logarithmic time Θ for any γ and α 2 ≠ 3 (eqs. (54)-(56)); in particular, one of the perturbations takes the form C 2 e (57−6.4α 2 )Θ for γ = 4/3 and C 2 e (88.5−6.4α 2 )Θ for γ = 2.
where C i , i ∈ κ are arbitrary constants and κ is the index set.
Let Φ denote the set of values of α 2 for which every perturbation decays to zero or evolves to a constant value as Θ → ∞. If we consider only the expression of η x obtained as a function of Θ, regardless of restricting the value of α 2 , then we can see that when Θ → ∞, η x → C 1 − l for α 2 = 3, η x → −l for α 2 > 3, and η y → C 2 for any positive value of α 2 . Similarly, it is seen that η z increases exponentially for α 2 > 1.67. So we fail to obtain a value of α 2 for which all of η x , η y , η z decay or evolve to a constant value as Θ tends to infinity; hence Φ is an empty set. Only when all of η x , η y and η z decay to zero or tend to a constant value when Θ → ∞ can we conclude that the fixed point is stable; it is unstable if at least one of them keeps increasing as Θ → ∞. For S to be stable, Φ should not be an empty set. Fig. 12, Fig. 13 and Fig. 14 show the perturbation plots for S at γ = 0.
From Fig. 12, as α 2 → 3 − the slope of the curve gradually decreases; when α 2 equals 3 exactly, the slope of the curve equals zero, and when α 2 becomes just greater than 3, η x becomes an exponentially decreasing function of Θ. So when α 2 > 3, as Θ → ∞, η x exponentially decreases and evolves to a constant value, namely −l. Fig. 13 shows that η y → 0 as Θ → ∞ for γ = 0 and any value of α 2 . But from Fig. 14 it is clear that when α 2 ≥ 3, η z increases exponentially as Θ increases and continues to grow as Θ → ∞. So S is unstable for any value of α 2 ; hence S is unstable for α 2 = 3 also. In Case III we have already presumed α 2 to be equal to 3, in order to ensure that the fixed point S obtained above is physically feasible with respect to the dynamical system we have set up. Using the above arguments we conclude that S is unstable from the side of the perturbation function. We will also show the use of Center manifold theory in determining the stability of the fixed point S. Center manifold theory is one of the most powerful tools to determine stability for non-hyperbolic fixed points, as the nature of the orbits on a center manifold reflects the nature of the system in the neighbourhood of the fixed point. To use Center manifold theory we need to transform the dynamical system equations into the standard form. We know that S = (x → l, y → 0.0009, z → 0) is a non-hyperbolic fixed point; using a suitable coordinate transformation we can transform the system into the required standard form, for the transformation will not change the nature of the fixed point.
We present how to analyse stability using Center manifold theory in the following section. B. Stability analysis for S using Center Manifold Theory: Firstly, we need to transform the dynamical system equations into the form required for center manifold theory. For this we shift the fixed point to the origin (0, 0, 0) by the coordinate transformation X = x − l, Y = y − 0.0009, Z = z. In terms of these new coordinates our dynamical system equations (51), (50) and (49) with α 2 = 3 can be rewritten. We then need to find the stable subspace E s generated by the eigenbasis associated with the negative eigenvalues, and the center subspace E c generated by the eigenbasis associated with the zero eigenvalue of the Jacobian matrix of the system at the origin. The eigenspace associated with the zero eigenvalue can be found by solving for x 1 , x 2 , x 3 in the corresponding eigenvalue equation, where I 3×3 and O 3×3 represent the identity matrix and the null matrix respectively.
Solving the above equations we get the eigenbasis associated with the zero eigenvalue, which spans E c . Similarly, we find the eigenbases associated with the eigenvalues −0.05l and −16.81, which together span the stable subspace E s . Both E c and E s are subspaces of R × R × R. Let us define a matrix P whose column vectors are formed by the above eigenbases. P is a non-singular matrix with det(P) = −646.8l, so P is invertible, with P −1 = (1/det(P)) Adj(P), where Adj(P) denotes the adjoint of P. We again define a new coordinate transformation (U, V, W) T = P −1 (X, Y, Z) T , so that X, Y and Z can be expressed in terms of the new coordinates U, V, W. The definition of the center manifold allows us to take h 1 and h 2 in Taylor series form, h 1 (U) = a 1 U 2 + a 2 U 3 + ... and h 2 (U) = b 1 U 2 + b 2 U 3 + .... We then obtain the required standard form to apply center manifold theory, and computing the resulting equations gives the dynamics of the center manifold together with the tangency conditions. By equating the coefficients of U 2 and U 3 in the tangency conditions (60) and (61), we can find the constants a 1 , a 2 and b 1 , b 2 , where we neglect all powers of U higher than U 3 . Equating the coefficients of U 2 and U 3 in the tangency condition of V, we get a 1 = a 2 = 0, and from the tangency condition of W, comparing the coefficient of U 2 , we get b 1 .
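Numerically, this change of basis is routine; the sketch below assembles a matrix P from eigenbasis columns and inverts it, using numpy's inverse rather than the Adj(P)/det(P) formula, which is the standard numerical practice. The eigenvectors shown are hypothetical stand-ins, not the vectors derived in the text.

```python
import numpy as np

# Hypothetical eigenbasis columns (placeholders for the vectors in the text):
v_center  = np.array([1.0, 0.0, 0.0])    # spans E^c (zero eigenvalue)
v_stable1 = np.array([0.0, 1.0, 0.2])    # spans part of E^s
v_stable2 = np.array([0.0, -0.3, 1.0])   # spans part of E^s

P = np.column_stack([v_center, v_stable1, v_stable2])
assert abs(np.linalg.det(P)) > 1e-12      # P is invertible iff det(P) != 0
P_inv = np.linalg.inv(P)                  # preferred over the adjoint formula
print(np.allclose(P_inv @ P, np.eye(3)))  # True
```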
Now when γ = 4 3 we have the Jacobian matrix at S as follows: theory. However stability analysis using Center manifold theory is similar to the above shown. So we will only analyze through perturbation function. From (54), (55) and (56), we see that for α 2 = 3 η x tends to a constant, namely, (C 1 − l) as Θ → ∞ but η y exponentially increases as Θ → ∞. η z is also an exponentially increasing function of Θ and hence it fails to decay or evolve to a constant value as Θ → ∞. Fig. 15 shows the exponential increasing nature of η y and η z at γ = 4 3 , α 2 = 3. Fig. 16 shows the perturbation plot for η x as Θ tends to infinity. So S is unstable at α 2 = 3 and γ = 4 3 . As the perturbation along each of the axis fail to decay or evolve to a constant value we conclude that S is also unstable for γ = 4 3 . For γ = 2 also we can see from (56) that the perturbation along z axis, namely, η z is an exponentially increasing function of θ. So S is unstable for any value of α 2 for γ = 2 and this is shown in Fig. 17 also.
The Jacobian matrix of the above system at the fixed point (0, 0, 0) is a null matrix which has all the eigenvalues as zero. So it is a non-hyperbolic fixed point. We will analyse the stability by finding perturbation functions along each of the axis as a function of logarithmic time Θ by perturbing the system (69) by a small amount. If the system comes back to the fixed point following the perturbation then the system is stable otherwise if the perturbation grows to make the system moves away from the fixed point then the system is unstable. Nandan Roy and Narayan Banerjee [66] has also used the concept of perturbation function to analyse stability for non-hyperbolic fixed points for three dimensional systems where linear stability fails. Now firstly consider the expression of (69) corresponding to +y, +z and +w respectively. Then we perturbed our system (69) by taking y = η y , z = η z and w = η w .
The domain of definition D Θ of the above function at γ = 0, and likewise at γ = 4/3 and γ = 2, takes the form (−∞, k/3) with α 2 > 16.15 and k = 103.4 − 6.4α 2 < 0. With the above domain and the choice of +y on the left side of (69), we cannot analyse our system for Θ → ∞, as Θ becomes bounded above and unbounded below as η y tends to 0; that is, when Θ → −∞, η y → 0. Since we want to analyse the late time behaviour of the Universe as the logarithmic time Θ → ∞, we only consider the expressions of (69) corresponding to −y, −z and −w on the left sides of (69). With this consideration we obtain Θ as a function f(η y ) of η y . When Θ → ∞, f(η y ) → ∞, which implies η y → 0. So as Θ → ∞ the perturbation along the y−axis decays to zero. For analysing the perturbations along the z and w axes we consider the expressions for +z and +w from (69) and find the expressions of η z and η w , where c 1 and c 2 are arbitrary constants of integration. As Θ tends to infinity, both η z and η w tend to zero. Fig. 18, Fig. 19 and Fig. 20 show the projections of the perturbations along the y, z and w axes respectively for the system (69). Since all of η y , η z and η w decay to zero as Θ tends to infinity, we conclude that the fixed point (±1, 0, 0, 0) is a stable critical point. Fig. 18 shows the variation of Θ with respect to η y for analysing stability at infinity for Case III, and Fig. 19 shows the variation of η z with respect to Θ for analysing stability at infinity for Case III.
Conclusion
In this work we have presented dynamical system analyses of spatially flat FLRW cosmological models with a running cosmological constant for three different possibilities of G and ρ Λ . In Case I, Figs. 1 and 2 have supported the analytical results. With the notion of the spectral radius we obtained a finer region of α 2 where F 2 is stable, that is, 2 < α 2 < 3. Fig. 3 shows the saddle behavior for α 2 > 3. For γ = 0 and α 2 > 3 the critical point at infinity behaves as an unstable repeller, representing the inflationary epoch of the evolving Universe; Fig. 6 and Fig. 7 show the phase plots of the stable attractor and the unstable repeller respectively. For γ = 4/3, m 1 > 0 and m 2 < 0 when α 2 < 3, and the critical point (1, 0, 0) behaves as a saddle point, which is unstable, representing the matter dominated phase of the evolving Universe. When α 2 > 3, both m 1 and m 2 are positive and the critical point (1, 0, 0) behaves as an unstable repeller. Fig. 8 and Fig. 9 also support these analytical results for γ = 4/3. For γ = 2, the behavior is the same as that for γ = 4/3. Since the degree of the polynomial system f(x, y) and g(x, y) is odd, the behavior at the antipodal point (−1, 0, 0) is exactly the same as the behavior at (1, 0, 0). In Case II of section 3, we present the case when ρ Λ = constant but G no longer remains constant. By introducing new variables, we represent the model with a two dimensional dynamical system where we obtain two non-hyperbolic fixed points P, Q. We present the stability analysis of these fixed points by using the spectral radius as well as the perturbation function, where we have found that both are stable for γ ∈ [0, 1/3), with Ω tt = 1 and effective equation of state ω eff = −1. Also, for P, both η x and η y converge to a constant value as Θ tends to infinity. When γ ≠ 0, η y → −b as Θ → ∞, but if we directly put γ = 0 in the expression of η y , it becomes a constant function, that is, η y = c 1 − b. Fig. 10 shows the variation of the perturbation along the y−axis, η y , with respect to Θ as γ → 0 + for the fixed point P; from Fig. 10 we see that as γ → 0 from the right the curves gradually tend to η y = c 1 − b. For Q, η x evolves to a constant value and η y decays to zero as Θ gradually increases and tends to infinity, as shown in Fig. 11 for γ < 1/3. So both fixed points are stable, which gives a dark energy model and forms a strong base for the fact that the Universe is undergoing not just expansion but expansion with acceleration. When we take both G and ρ Λ to be non-constant, we see from Case III of section 3 that we can extend the system to a three dimensional problem. We have analysed the system when α 2 = 3 under three different values of γ, that is, γ = 0 (dark energy model), γ = 4/3 (radiation dominated model) and γ = 2 (stiff fluid model), and studied the stability of the system and the corresponding cosmological implications. At γ = 0 the fixed point S is non-hyperbolic, as some of the eigenvalues of the Jacobian matrix vanish. Since S is non-hyperbolic, we do the stability analysis by studying how the perturbations along each of the three axes vary with the increase in Θ. As the set Φ = ∅, S is unstable. Fig. 12, Fig. 13 and Fig. 14 show the perturbation plots for S. We have also used Center manifold theory to analyse stability, using a suitable coordinate transformation to obtain the standard form; as the dynamics of the center manifold is unstable, we deduce that S is unstable. From both approaches we find that S is unstable. For γ = 4/3 as well as γ = 2, S is non-hyperbolic and unstable. Fig. 15 and Fig. 16 show the perturbation plots of S for γ = 4/3.
The perturbation function along each of the axes fails to decay or to evolve to a constant value as Θ → ∞, which shows that S is unstable. Fig. 17 shows that η_z continues to increase exponentially as Θ increases, which indicates that S is unstable for γ = 2 as well. To analyse stability at infinity we use the concept of the Poincaré sphere, as any polynomial system in rectangular coordinates can be extended to the Poincaré sphere [65]. Here, since the system is three-dimensional, the ideas of projective geometry have been carried over to higher dimension to analyse stability for flows in R³ [59].
The critical points at infinity occur at the points (±1, 0, 0, 0) on the equator of the Poincaré sphere S³. Since the perturbations along each of the axes, η_y, η_z and η_w, decay to zero as Θ tends to infinity, as shown in Figs. 18, 19 and 20, we conclude that the fixed point (±1, 0, 0, 0) is a stable attractor.
Throughout this work, the cosmological models developed strongly support the fact that the Universe is in a phase of accelerated expansion, showing that our model is deeply connected with the accelerated-expansion phenomenon.
Declaration
The authors declare that there is no conflict of interest regarding the publication of this paper. | 2022-04-01T01:15:35.049Z | 2022-03-31T00:00:00.000 | {
"year": 2022,
"sha1": "ddb43228a24372ded876dd71075312b9f3741d2f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-022-10826-8.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "2c443bceb48fdba00c5b336915b46795a9b73669",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13147196 | pes2o/s2orc | v3-fos-license | Firmness at Harvest Impacts Postharvest Fruit Softening and Internal Browning Development in Mechanically Damaged and Non-damaged Highbush Blueberries (Vaccinium corymbosum L.)
Fresh blueberries are very susceptible to mechanical damage, which limits postharvest life and firmness. Softening and the susceptibility of cultivars "Duke" and "Brigitta" to developing internal browning (IB) after mechanical impact and subsequent storage were evaluated during a 2-year study (2011/2012, 2012/2013). In each season fruit were carefully hand-picked, segregated into soft (<1.60 N), medium (1.61–1.80 N), and firm (1.81–2.00 N) categories, and were then either dropped (32 cm) onto a hard plastic surface or left non-dropped. All fruit were kept under refrigerated storage (0°C and 85–88% relative humidity) to assess firmness loss and IB after 7, 14, 21, 28, and 35 days. In general, regardless of cultivar or season, high variability in fruit firmness was observed within each commercial harvest, and significant differences in IB and softening rates were found. "Duke" exhibited high softening rates, as well as high and significant r² between firmness and IB, but little difference between dropped and non-dropped fruit. "Brigitta," having lower softening rates, exhibited almost no relationship between firmness and IB (especially for non-dropped fruit), but marked differences between dropping treatments. Firmness loss and IB development were related to firmness at harvest, soft and firm fruit being the most and least damaged, respectively. Soft fruit were characterized by greater IB development during storage along with a high soluble solids/acid ratio, which could be used together with firmness to estimate harvest date and storage potential of fruit. Results of this work suggest that the differences in fruit quality traits at harvest could be related to the time that fruit stay on the plant after turning blue, soft fruit being more advanced in maturity. Finally, the observed differences between segregated categories reinforce the importance of analyzing fruit condition for each sorted group separately.
INTRODUCTION
Blueberry production has increased rapidly around the world over the last two decades (Lobos and Hancock, 2015). Chile is the second largest global producer, as well as the first exporter of fresh blueberries to the Northern Hemisphere (USA, Canada, Europe, and Asia). Most of the Chilean fruit is sent by boat, with transit periods of 20-50 days depending on destination. Blueberries are highly perishable, so fruit quality upon arrival to the final markets has major relevance to ensure economic returns (Beaudry et al., 1998;Retamales et al., 2014).
Several quality (dust, contaminants, size, bloom, russet/scars, attached stems, flower remains, and color) and condition (decay, mold, wounds, dehydration, firmness, and shriveling) traits are evaluated by inspection companies at destination markets. Among them, and regardless of season, dehydration and softening are the most common defects causing shipment rejections (Moggia et al., 2016b).
At present, due to the low availability and high cost of labor for hand picking, farmers are being forced to invest in the mechanization of this critical production phase (Takeda et al., 2008;Xu et al., 2015). Mechanical harvesting of blueberries has the advantages of increasing capacity and efficiency as well as reducing labor costs, but there are discrepancies as to its real contribution to the fresh fruit market. In general, machine harvesting reduces the amount of acceptable fruit that can be exported, as a result of softening and excessive bruising; nevertheless, promising results have been reported on the use of a particular shaker, making it a viable alternative during critical periods (Lobos et al., 2014b). Fruit can also develop bruising during transport from the field to the packing-house, or when being processed on the packing-lines (Xu et al., 2015).
Blueberries are especially susceptible to mechanical damage, with injured berries resulting in loss of firmness that leads to reduced fruit quality and shelf-life (Xu et al., 2015). Bruises develop in the flesh of the damaged fruit as internal browning (IB) areas, resulting from tissue breakage and oxidation of phenolic compounds (Studman, 1997;Opara and Pathare, 2014). In order to relate the effect of mechanical damage with bruise damage, as done on large fruits and vegetables with instrumented spheres, a blueberry impact-recording device (BIRD) has been developed (Yu et al., 2011, 2014). Recently, Xu et al. (2015) measured the mechanical impacts on packing lines with the BIRD, showing that most of them occurred at the transfer points and that the highest impacts were recorded in one of the final handling steps, when the sensor dropped into the hopper above the clamshell filler.
Unfortunately, blueberry bruising can be expected to continue occurring, not only because of the use of mechanical/semi-mechanical harvest, or of differences between packing-line designs (e.g., number and height of transfer points, presence/absence of cushion materials), but also because of the lack of enough processing facilities during harvest peaks. Because of this, operators are forced to increase the speed at the sorting/packing lines, increasing the risk that fruit develop softening and IB during postharvest.
By simulating mechanical impact damage (as for other fruit species such as apples), the resistance of blueberries to IB has been evaluated by dropping fruit from different heights onto diverse surfaces; damage is rated on an internal bruise severity scale (affected area) after a period of cold storage (Yu et al., 2014). When berries were dropped from 15 to 30 cm onto hard surfaces, Brown et al. (1996) concluded that fruit developed IB on up to 50% of fruit area, and firmness declined significantly in samples having 25% or more damaged area. Yu et al. (2014) also reported a genotype effect, soft-textured cultivars being more susceptible than firm-textured ones when dropped on a hard plastic surface. However, all reported studies omit the high variability in firmness that occurs within a commercial clamshell, and hence the question arises whether results obtained for a given cultivar may be reproducible when variations in maturity stage, environmental conditions, and management procedures affect the proportions of soft, medium and firm fruit in a particular picking.
To the best of our knowledge, there are no previous reports on the implications of firmness segregation at harvest for the development of IB and softening of blueberries maintained under refrigerated conditions. Thus, the objective of this study was to understand how initial firmness and a single mechanical impact could affect the evolution of these traits during postharvest. For this, during two seasons, "Duke" and "Brigitta" fruit were segregated into soft, medium and firm categories at harvest, evaluating firmness loss and IB development of dropped (32 cm) and non-dropped fruit during 35 days under cold storage.
Plant Materials
During two consecutive seasons [2011/2012 (Y1) and 2012/2013 (Y2)], highbush blueberry (Vaccinium corymbosum L.) fruit of cultivars "Duke" and "Brigitta" (6- and 4-year-old, respectively) were collected at the peak of the commercial harvest from Chilean orchards located in Longaví (36°00′ S; 71°35′ W) and Santa Bárbara (37°29′ S; 72°19′ W), respectively. Both cultivars were planted on raised beds, at 3 m × 1 m in a loam soil. Each bed had two drip irrigation lines (2.4 L h⁻¹ every 50 cm); irrigation frequency and timing were determined according to tensiometers established on each block at 30 and 50 cm depth. Pruning (May to July) was oriented to contribute to light entrance and air circulation, ensuring a balance between canes of different ages and a stable production over time; pruning consisted of removing canes that were either unproductive or causing excessive shade on the plant. Fertigation was applied according to soil/foliar analysis and yield estimations; the main nutrients were N (90-120 and 10-25 kg ha⁻¹ for "Duke" and "Brigitta," respectively), K₂O (25-30 kg ha⁻¹), and P₂O₅ (150-180 kg ha⁻¹). Environmental conditions are summarized in Supplementary Table S1.
In order to mimic the marketable characteristics of exported fresh fruit, all fruit were harvested upon the commercial criterion, which is based on 100% blue color ("Duke" December 5, 2011 and December 3, 2012; "Brigitta" December 29, 2011 and January 3, 2013). Berries were hand-picked by qualified workers belonging to each orchard. To avoid potential differences in sorting and packaging facilities, and to reduce IB damage, fruit were harvested directly into plastic clamshells (125 g). Fruit were immediately transported to the laboratory facilities at Universidad de Talca (35°24′ S; 71°38′ W) for further analysis and treatment establishment.
Experimental Set-up and Measurements
Upon arrival to the research facilities, fruit were initially characterized in terms of firmness and IB, and then subjected to firmness segregation, impact damage simulation, and finally stored under refrigerated conditions as described below.
Firmness Segregation and Initial Condition on Each Category
Using the same equipment as for firmness assessments, fruit were assigned to one of three firmness categories: soft (<1.60 N), medium (1.60-1.80 N), and firm (1.81-2.00 N). For each season, this segregation represented 50 clamshells (125 g) per cultivar and category, from which each replicate was withdrawn. Then, for each firmness group, the following traits were assessed as initial condition: (i) firmness on five replicates of 20 fruit each; (ii) total soluble solids (TSS, %) using a digital refractometer (Pocket PAL-1, Atago, Tokyo, Japan), from juice obtained from five replicates of five berries each; (iii) titratable acidity (TA, % citric acid equivalents) from five replicates; each one consisted of 10 mL of blueberry juice diluted to 100 mL with distilled water and titrated with 0.1 mol L⁻¹ NaOH to an end-point pH of 8.2; (iv) TSS/TA ratio; and (v) IB on slices of five replicates of 20 fruit each.
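To make the segregation and titration protocol concrete, here is a minimal Python sketch; the firmness readings and the helper names (firmness_category, titratable_acidity) are hypothetical illustrations, not from the study, and the conversion factor 0.064 g/meq is the standard milliequivalent weight of citric acid, assumed here for the % citric acid computation.

import numpy as np

def firmness_category(f_newton):
    # Segregation thresholds used in the study (N)
    if f_newton < 1.60:
        return "soft"
    elif f_newton <= 1.80:
        return "medium"
    elif f_newton <= 2.00:
        return "firm"
    return "above range"  # not used in the experiment

def titratable_acidity(naoh_ml, naoh_molarity=0.1, juice_ml=10.0):
    """% citric acid equivalents from an end-point titration to pH 8.2."""
    meq = naoh_ml * naoh_molarity          # milliequivalents of NaOH used
    return meq * 0.064 / juice_ml * 100.0  # g citric acid per 100 mL of juice

# hypothetical firmness readings (N) for one clamshell
firmness_n = np.array([1.45, 1.62, 1.85, 1.72, 1.58])
print([firmness_category(f) for f in firmness_n])
print(titratable_acidity(naoh_ml=6.2))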
Impact Damage Simulation
In order to study the evolution of IB and softening originated by impact damage, half of the fruit within each firmness category group were dropped from 32 cm onto a 30 cm × 30 cm hard plastic surface (6.4 mm-thick plexiglass), while the other half remained non-dropped. The dropping height was selected based on previous findings (data not published), as well as on reports of extensive bruising resulting from 15-30 cm drop heights onto hard surfaces (Xu et al., 2015). For each cultivar, both dropped (32 cm) and non-dropped (0 cm) fruit were placed within clamshells into cardboard boxes, and then stored during 35 days at 0°C and 85-88% relative humidity (RH).
Firmness and IB Evolution during Postharvest
For each cultivar, firmness category group, and dropping treatment, firmness and IB evaluations were undertaken in samples (five replicates of 20 fruit each) from clamshells removed from cold storage after 7, 14, 21, 28, and 35 days. After each storage removal, fruit were acclimated to room temperature (18°C) for 3 h prior to performing measurements. Individual fruit were first assessed for firmness and then cut transversally for IB rating.
Statistical Analysis
Firmness and IB condition of commercial fruit at harvest (before firmness segregation) was described for each cultivar and season, through box and whisker plots. Quality traits of fruit segregated at harvest were analyzed considering a completely randomized design with factorial arrangement, considering three firmness categories (soft, medium, and firm) × two seasons (Y1 and Y2). Data of parametric variables were subjected to analysis of variance (ANOVA), and significance of the differences was determined by Tukey's test (p ≤ 0.05). IB data was subjected to non-parametric ANOVA with aligned rank for non-parametric analysis of multifactor designs (Oliver-Rodríguez and Wang, 2013) and mean separation by Tukey's test (p ≤ 0.05) for ranked data.
For the postharvest study, in order to determine the relationships between firmness and IB during storage, data were subjected to regression analysis (r²) and models were fitted for each cultivar, season, firmness category, and drop height. Additionally, statistical comparisons of slopes and intercepts between models for dropped vs. non-dropped fruit, and between firmness categories of each dropping treatment (soft vs. medium; medium vs. firm; and soft vs. firm), were performed. Data were transformed to obtain linearized models between firmness (x) and IB (y). The best-fitted model was 1/x for both cultivars.
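A minimal sketch of this regression step in Python (statsmodels is one standard choice, not necessarily the software used by the authors; the firmness/IB arrays are hypothetical placeholders): the 1/x transformation linearizes the model, the interaction term tests equality of slopes between dropped and non-dropped fruit, and the group dummy tests equality of intercepts.

import numpy as np
import statsmodels.api as sm

# hypothetical (firmness N, IB score) pairs for one cultivar/season
firm = np.array([1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9])
ib   = np.array([2.8, 2.4, 2.1, 1.8, 1.5, 1.3, 2.2, 1.9, 1.6, 1.4, 1.2, 1.0])
dropped = np.array([1]*6 + [0]*6)  # 1 = dropped (32 cm), 0 = non-dropped

x = 1.0 / firm  # best-fitted linearization was 1/x
X = sm.add_constant(np.column_stack([x, dropped, x * dropped]))
model = sm.OLS(ib, X).fit()

# coefficient on x*dropped tests equality of slopes;
# coefficient on dropped tests equality of intercepts
print(model.params)
print(model.pvalues)
print("r2 =", model.rsquared)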
Fruit Condition at Harvest
Firmness and IB before Fruit Segregation
When the commercial fruit sample was assessed for firmness at harvest, both cultivars displayed a wide range of values (Figure 2A). "Duke" firmness showed similar mean values during seasons 2011/2012 (Y1) and 2012/2013 (Y2) (1.55 and 1.60 N, respectively), whereas higher disparity was found in "Brigitta" (1.52 and 1.92 N, correspondingly). Yet, comparison by the Kolmogorov-Smirnov test (p ≤ 0.05) revealed significant differences in frequency distribution between years for both varieties (data not shown). Additionally, in both cultivars, fruit harvested in Y1 had greater variability (largest and smallest data values, wider quartile distributions, greater number of outliers) than berries picked in Y2. For "Duke," 55 and 50% of fruit were below 1.6 N (the upper threshold of the soft firmness category) for Y1 and Y2, respectively. For "Brigitta" these values reached 60 and 15% for Y1 and Y2, correspondingly. If a threshold of 1.4 N for very soft fruit is considered, 25% (Y1) and 10% (Y2) of "Duke" fruit were below that level, whereas values for "Brigitta" were 42 and 5% for Y1 and Y2, in that order (Figure 2).
Fruit Quality after Firmness Segregation
Once samples were segregated by firmness, the ANOVA showed that fruit quality at harvest was influenced by initial firmness (Table 1). In both cultivars, firmer fruit was associated with higher TA but lower TSS/TA and IB; TSS was significant only in "Brigitta," and higher in the softer group (<1.60 N). Differences between years occurred for TSS, TA, and TSS/TA for "Duke" and for TSS, TA, and IB for "Brigitta," reinforcing the higher variability found for this last trait during Y1. Significant interactions occurred for TA in "Duke" (with differences between categories in Y1, but no differences in Y2) and for IB in "Brigitta" (with differences only for soft fruit between years, Y1 showing higher IB than Y2) (Supplementary Figure S1).
Firmness and IB Evolution of Dropped and Non-dropped Fruit during Postharvest
In comparison to "Brigitta," "Duke" berries showed lower firmness retention over time, irrespective of firmness category, dropping treatment or season (Figure 3). Between harvest and the end of storage, and for both seasons, firmness of "Duke" blueberries was reduced on average by 39.8, 33.6, and 38.6% (Figures 3A,C,E) for soft, medium, and firm fruit, respectively (data not shown), whereas firmness loss in "Brigitta" averaged 17.3, 24.4, and 23.8%, correspondingly (Figures 3B,D,F). When dropped and non-dropped fruit were compared, "Brigitta" fruit appeared to be more sensitive to initial firmness, since significant differences between damaged and non-damaged fruit were found for most of the storage evaluations (medium in Y1; soft, medium, and firm in Y2). In contrast, for "Duke" samples, consistent differences between dropped and non-dropped fruit along the whole storage period were observed only for soft fruit harvested in Y1. Additionally, the magnitude of the differences between dropped and non-dropped fruit, as well as between seasons, was higher for "Brigitta." In general, IB was higher after storage than at harvest, particularly for soft fruit (Figure 4), regardless of cultivar, year, or dropping treatment. "Duke" fruit exhibited relatively low IB values up to 21 days of storage, with the highest IB at 35 days for soft (Y1 and Y2) and medium firmness fruit (Y2) (Figures 4A,C).
[Table 1 notes: For a given cultivar, or factor, and significance p ≤ 0.05, different letters within a column represent significant differences (Tukey's test, p ≤ 0.05). (a) Traits: total soluble solids (TSS), titratable acidity (TA), and internal browning (IB) damage categories: 0 (0-5%), 1 (6-25%), 2 (26-50%), 3 (51-75%), or 4 (>75%). (b) In red, p-values lower than 0.05.]
Similarly to the evolution of firmness in postharvest (Figure 3), "Duke" fruit also developed less IB in response to dropping, given that no significant differences between treatments were found at most of the evaluation dates. "Brigitta," on the other hand, showed marked differences in IB development between dropped and non-dropped fruit for all firmness categories (Figures 4B,D,F). Compared to "Duke," and regardless of dropping treatment, "Brigitta" fruit developed lower IB within the medium and firm categories (Figures 4B,D).
Relationship between IB and Firmness
For "Duke" samples, the regression analyses (r²) between IB and firmness (Table 2 and Figure 5) revealed significant effects on dropped and non-dropped fruit for all three firmness categories and for both seasons. Although r² varied among comparisons, soft and firm fruit showed in general the highest values. In contrast, 9 out of the 12 models fitted for "Brigitta," which included all non-dropped fruit of both years and dropped fruit of Y2, showed no significant associations. During Y1, the highest r² values for dropped fruit were found for soft and medium fruit of this cultivar (72.7 and 80.6, respectively). The comparisons of slopes and intercepts between dropping treatments (Table 2) showed that, for "Duke," significant differences were found only between intercepts of firm fruit harvested in Y2. In contrast, the equations developed for "Brigitta" differed in slopes (soft and medium fruit of Y1) and intercepts (medium fruit of Y1, all three categories in Y2) in five out of the six instances. When firmness categories were contrasted within the same dropping treatment (Table 3), outcomes varied between seasons. For non-dropped fruit of Y1, three comparisons resulted in different intercepts (medium vs. firm for "Duke"; soft vs. medium, and soft vs. firm for "Brigitta"), but no differences were found between slopes. For the same treatment, differences in Y2 occurred between slopes for "Duke" (medium vs. firm, and soft vs. firm) and intercepts for "Brigitta" (soft vs. medium). Within dropped fruit of Y1 no significant differences were found for any comparison for "Duke," whereas two cases were statistically significant for "Brigitta" (soft vs. medium differed in intercept and slope; medium vs. firm differed in slope). In Y2, differences between intercepts of medium vs. firm, and soft vs. firm, occurred for "Duke," while for "Brigitta" the only significant difference was between slopes of soft vs. firm fruit.
DISCUSSION
The analysis of fruit characteristics at harvest revealed two important aspects that have not been reported previously. The first one is that, regardless of cultivar or season, high variability in fruit firmness occurred within each commercial harvest. In comparison with other fruit species such as apple, for which very soft fruit (58-62 N) represent less than 0.5-0.8% (Herregods and Goffings, 1993;De Silva et al., 2000), a high percentage of "Duke" and "Brigitta" blueberries showed this characteristic (<1.4 N) in Y1 (25 and 42%, respectively) and Y2 (10 and 5%, respectively). The second one refers to the noticeable differences in quality traits found between firmness categories, which highlights the relevance of analyzing the development of softening and IB for each sorted group separately. These two aspects will be covered during the discussion.
Susceptibility of Blueberries to Develop IB
IB was detected at harvest in this study, even though fruit were carefully hand-picked and not subjected to sorting or packing. Gołacki et al. (2009) indicated that vibration forces, usually occurring during transportation from the field, are difficult to avoid and may also cause damage. In addition to possible damage sources before harvest (e.g., due to wind or machinery), the fruit samples used herein underwent a ∼3-h trip from the field to the laboratory, and hence transportation may have contributed to the basal IB found. Indeed, unless a packinghouse facility is available at the producing orchard, it is common that fruit travel 2-3 h until being processed. This observation highlights the importance of careful handling of the fruit throughout the whole production and distribution chain, and reveals large differences within a particular cultivar among seasons. In fact, variability in firmness and IB at harvest differed between cultivars, with "Duke" fruit being more homogeneous in both seasons, whereas "Brigitta" berries showed greater differences within and between years. The high IB values in Y1 at harvest for "Brigitta" were associated with softer fruit (Figure 2). Variations in ambient temperature between both seasons (Supplementary Table S1) may partially account for the differences in fruit condition between seasons and cultivars, especially for the higher heterogeneity of "Brigitta" samples in Y1.
Although there is not much information, it has been suggested that an ideal range of temperatures for northern highbush blueberries might be 20-25°C (Davies and Flore, 1986); values above 30°C (also associated with high light intensity, as in Chile) cause plant damage (Trehane, 2004;Lobos and Hancock, 2015), as well as lowered wax coverage of fruit, which tend to be smaller and softer (Mainland, 1989). With the exception of precipitation (Y1: 32.9 mm and Y2: 102 mm), Longaví does not usually register substantial differences in environmental conditions from early October (full bloom) to early December (harvest) (Supplementary Table S1). This might in part explain the lower variability between seasons observed for "Duke." On the other hand, different temperature patterns for each season were registered in Santa Bárbara in December. Even though more favorable temperatures occurred in Y1 (20-25°C), more temperature extremes took place (a greater number of hours or days hotter than 27, 29, and 32°C), probably leading to early softening of fruit. It is also highly likely that blueberries can be damaged on packing-lines. Xu et al. (2015) studied 11 commercial packing lines using the BIRD and found that the tested lines differed in their combinations and alignments, thus creating different points for potential impact damage. Yet, all the impacts occurred at transfer points, the highest drop heights being 35-36 cm. Additionally, the latter part of the packing line, where fruit drop into the hopper for loading clamshells, is another point for potential damage due to the combination of a hard contact surface (usually stainless steel) and a high drop height (Xu et al., 2015), especially when the first berries drop into the hopper, since they impact directly onto the hard surface. As more fruit get into the line, ever more fruit-to-fruit impacts take place, this being a source of impact that has not been fully incorporated in studies dealing with mechanical damage. Results obtained in the present study show that significant differences in IB development between "Duke" and "Brigitta" occurred with drop heights of 32 cm, evidencing a differential effect of season, cultivar, and firmness category.
In order to standardize sorting/packing-lines and to establish some basic recommendations to improve condition, it is critical to identify which fruit would be more prone to softening and IB during postharvest. Unfortunately, given that the main criterion for establishing the harvest date of blueberries is skin color, and that high labor costs are associated with this operation (Takeda et al., 2008;Lobos et al., 2014b), growers wait for blue fruit to accumulate on the bush before starting commercial pickings. This practice results in fruit with similar external appearance but, as found in the present study, with important heterogeneity in maturity status, which leads to a wide range of firmness levels at harvest, as well as of softening rates during postharvest. Previous works have proved that delaying harvest increases TSS and TSS/TA but reduces TA and firmness (Woodruff et al., 1960;Ballinger et al., 1963;Kushman and Ballinger, 1963;Lobos et al., 2014a), since TSS increases and acids decrease due to fruit respiration in the course of maturation (Famiani et al., 2005;Dai et al., 2009). In fact, when fruit showing no differences in skin color at harvest (determined either visually or instrumentally) were picked 2 or 6 days after turning 100% blue on the bush, important differences in fruit condition associated with these two maturity stages were demonstrated (Moggia et al., 2016a,b). In those previous studies, when similar percentages of green and pink fruit were reached early in the season, clusters with similar characteristics and canopy position were selected and labeled. Fruit development was followed until both maturity stages were reached: 100% blue and residing on the plant for a maximum of 2 days (ripe), and 100% blue and residing on the plant for 6 days (overripe). That methodology allowed the authors to conclude that, when these two maturity stages were selectively picked, important differences were found, "Duke" being more sensitive than "Brigitta" to this factor. The elapsed time between harvests was enough to increase TSS and TSS/TA of "Duke" samples, and to reduce fruit firmness in both cultivars. These findings reinforce the importance for fruit heterogeneity of the time that fruit stay on the plant after turning 100% blue. In the present study, segregation by firmness at harvest revealed similar trends for these traits, suggesting that fruit within the soft category had actually stayed longer on the plant after turning completely blue. Accordingly, when fruit were segregated based on firmness, berries assigned to the soft category displayed the highest IB, TSS, and TSS/TA values (Table 1). Given the variability found at harvest (box and whisker plots), these dissimilarities would be higher for Y1 "Brigitta" fruit, thus accounting for the greater differences found according to the dropping treatment between fruit within the soft and the medium categories. In fact, according to the Chilean blueberry industry, overall commercial defects (including softening, dehydration, and mechanical damage) differ between seasons, and the affected produce may account for 10-45% of the fresh fruit reaching final markets (Moggia et al., 2016b).
Bruising as Related to Firmness
Firmness is one of the characteristics most frequently measured to evaluate the quality of fresh fruit. As for many other fruit species, firmer blueberries can more readily withstand harvest handling, and will therefore have a longer storage potential (Hanson et al., 1993;Yu et al., 2014). Differences in firmness among highbush blueberry cultivars seem to be more dependent on physiological maturity at harvest than on genotypic differences (Beaudry et al., 1998;Lobos et al., 2014a); yet there is limited information on the relevance of firmness at harvest for the postharvest quality of fruit within a particular cultivar. Wolfe et al. (1983) demonstrated that firmness separation of blueberries at harvest allows better control of postharvest decay, since soft, medium, and firm fruit show different susceptibility to rot, and fruit segregation enhanced disease control when combined with a hot water dip. Similarly, the present study demonstrates that softening and IB development are related to the firmness at harvest of individual fruit, and that high IB can be expected in soft fruit of both cultivars after prolonged storage. Since in this study the highest IB rates were always found for soft berries (<1.60 N), our findings strengthen the idea that mid-to-firm berries can better withstand a long trip to distant markets. Therefore, any strategy oriented to increase the percentage of these firmness classes in the clamshells will assure higher and more homogeneous quality upon arrival at the final destination.
Dropping the fruit did not always lead to higher IB values, and this observation was more evident for "Duke" samples, in which high softening rates but small differences in IB between dropped and non-dropped fruit occurred (Figure 4). This finding agrees with the lack of differences between slopes and intercepts of the models fitted for fruit of this cultivar (0 vs. 32 cm drop heights) (Table 2); the only difference was found between intercepts of firm fruit, but not between slopes, which indicates similar rates of change in IB per firmness unit both for dropped and non-dropped fruit (Table 2 and Figure 5). Yet, significant associations between firmness and IB, and generally higher r² coefficients, both for dropped and non-dropped fruit, were obtained for "Duke" as compared to "Brigitta" samples (Table 2). On the other hand, the fact that "Brigitta" fruit did not show significant associations for most of the equations indicates a weak relationship between firmness and IB development for this cultivar, especially for samples harvested in Y2. However, higher IB levels in dropped than in non-dropped fruit, regardless of fruit firmness at harvest, should be expected for this cultivar (Table 2 and Figure 5). The analyses undertaken for "Brigitta" samples corresponding to Y1 (more heterogeneous in initial condition, and with significant r² values for dropped fruit only) reveal that differences in slopes and intercepts occurred for all three firmness categories, with different rates of change between dropping treatments. When equations were compared between firmness categories within each dropping treatment (Table 3), variability between seasons became more evident, since significances were not the same in the two years considered. Moreover, different slopes (meaning a dissimilar rate of change in IB per firmness unit) were found for "Duke" 0 cm and "Brigitta" 32 cm, whereas different intercepts (indicating similar rates, but a different damage threshold) occurred for "Duke" 32 cm and "Brigitta" 0 cm. Additionally, most of these differences were observed between soft and firm fruit, which emphasizes the negative effects on quality resulting from a high proportion of soft fruit in a particular picking.
According to these results, each cultivar would display a different pattern of IB development when subjected to mechanical damage. Therefore, and depending upon fruit condition at harvest (initial firmness), fruit might not necessarily exhibit severe IB symptoms but would probably show different softening patterns. Another important aspect to consider is that sectioning berries through the equator detects bruising caused by impacts occurring on that area, but this procedure does not take into account damage at or near the calyx or stem ends, and would hence lead to an underestimation of the actual mechanical damage (Yu et al., 2014).
The present study demonstrated the different susceptibility to IB development and the different softening rates of blueberry fruit among cultivars and firmness categories at harvest, and suggests that fruit displaying firmness lower than 1.6 N at harvest should be avoided if long-term storage is intended. Galletta et al. (1971) proposed that good keeping quality could be expected when TSS/TA ratios are <18, whereas intermediate keeping quality would result from higher TSS/TA values. Given that TSS/TA ratios at harvest of medium and firm fruit ranged from 15 to 21, and that soft fruit values ranged from 19 to 29, it is suggested that this ratio could be used as an additional index to define harvest time and destination of the fruit (long- vs. short-term storage).
Overall, "Duke" fruit were characterized by high rates of firmness loss, as well as by a strong association between firmness and IB, but few differences were found between dropped and non-dropped fruit. "Brigitta" berries had slower softening rates, and displayed very weak relationships between firmness and IB (especially for non-dropped fruit), but marked differences between dropping treatments were found.
CONCLUSION
Results of this work suggest that the mean firmness value may not be adequate as an indicator of blueberry fruit condition at harvest, and that the differences in fruit quality traits associated with the initial firmness level might be related to the time that fruit stay on the plant after turning blue, softer fruit displaying more advanced maturity. This finding suggests that, during seasons in which adverse environmental events occur (probably associated with high temperatures close to harvest), the proportion of soft fruit and its evolution during shipments would increase rejections at destination markets. Future research should include a more detailed study of potential sources of fruit heterogeneity. Furthermore, more systematic measurements of changes throughout fruit development from early stages, as done for other species, could help in modeling softening and IB during postharvest. Finally, long-term studies are needed to quantify the real genotypic and environmental effects on softening and IB development in blueberries.
AUTHOR CONTRIBUTIONS
CM and GL contributed to the conception and design of the work. GG, CM, and GL performed acquisition, analysis, and interpretation of data for the work. CM, GL, JG, and IL collaborated to generate and validate the version to be published.
FUNDING
In Chile, this work was supported by the National Commission for Scientific and Technological Research CONICYT (FONDECYT 11130539) and the Universidad de Talca (research programs "Adaptation of Agriculture to Climate Change (A2C2)", "Fondo Proyectos de Investigación" and "Núcleo Científico Multidisciplinario"). In Spain, this work was partially supported by "Fundación Carolina" and "Programa de Doctorado en Ciencia y Tecnología Agraria y Alimentaria", Universitat de Lleida. | 2017-05-17T20:06:34.863Z | 2017-04-11T00:00:00.000 | {
"year": 2017,
"sha1": "284f6c41a281e7aef999950fec7065c0db7724b4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2017.00535/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1066e4a0219a181f84b4e1c1362a29219156b73",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
258109158 | pes2o/s2orc | v3-fos-license | Tube formulas for valuations in complex space forms
Given an isometry invariant valuation on a complex space form, we compute its value on the tubes of sufficiently small radii around a set of positive reach. This generalizes classical formulas of Weyl, Gray and others on the volume of tubes. We also develop a general framework for tube formulas for valuations in riemannian manifolds.
Introduction
For a compact convex set A ⊂ R^m, the Steiner formula computes the volume of the set A_t, consisting of the points at distance smaller than t from A, as follows:

vol(A_t) = Σ_{i=0}^{m} ω_{m−i} t^{m−i} μ_i(A).   (1)

Here the functionals μ_i are the so-called intrinsic volumes, and the normalizing constant ω_k is the volume of the k-dimensional unit ball. By Hadwiger's characterization theorem, the intrinsic volumes span the space of valuations (finitely additive functionals on convex bodies) that are continuous and invariant under rigid motions. The famous tube formula of H. Weyl is the assertion that (1) holds true for A ⊂ R^m a smooth compact submanifold and t ≥ 0 small enough, with the additional insight that the coefficients μ_i(A) depend only on the induced riemannian structure of A. Even more generally, Federer extended the validity of (1) to the class of compact sets of positive reach. Later on, the same formula has been proven to hold for bigger classes of sets (see e.g. [20,23]). As for the coefficients μ_i, the current perspective is to view them as smooth valuations in the sense of Alesker's theory of valuations on manifolds (see [7]).
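As a quick illustration of (1), take A the unit disk in R² (m = 2), so that A_t is the disk of radius 1 + t:

\[
\mathrm{vol}(A_t) = \pi(1+t)^2
= \underbrace{\pi t^2}_{\omega_2 t^2 \mu_0(A)}
+ \underbrace{2\pi t}_{\omega_1 t\, \mu_1(A)}
+ \underbrace{\pi}_{\omega_0\, \mu_2(A)},
\]

so μ_0(A) = 1 = χ(A), μ_1(A) = π (half the perimeter 2π) and μ_2(A) = π, the area.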
Already in Weyl's original work, the tube formula was extended to the sphere and to hyperbolic space. In that case, instead of a polynomial in the radius t, one has a polynomial in certain functions sin_λ(t), cos_λ(t), whose definition we recall in (52). Later, Gray and Vanhecke computed the volume of tubes around submanifolds of rank one symmetric spaces (cf. [26]).
All these classical tube formulas are most naturally expressed in the language of valuations on manifolds. Furthermore, this theory has allowed for the determination of kinematic formulas (a far-reaching generalization of tube formulas) in isotropic spaces. These spaces are riemannian manifolds under the action of a group of isometries that is transitive on the sphere bundle. For instance, in [15] and [16] the kinematic formulas of complex space forms (i.e. complex euclidean, projective and hyperbolic spaces) were obtained, and Gray's tube formulas on such spaces were recovered.
Tube formulas, however, exist also for valuations other than the volume, and these do not follow from the kinematic formulas. For instance, differentiating the Steiner formula one easily obtains

μ_i(A_t) = Σ_{k=0}^{i} \binom{m−k}{m−i} (ω_{m−k}/ω_{m−i}) t^{i−k} μ_k(A).   (2)

In real space forms (i.e. the sphere and hyperbolic space), Santaló obtained similar tube formulas for all isometry invariant valuations (see [40]). For rank one symmetric spaces, the tube formulas of a certain class of valuations (integrated mean curvatures) were found in [26], still with a differential-geometric viewpoint. There are, however, many invariant valuations on these spaces that were not considered.
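For instance, the case i = m − 1 of (2) can be read off by differentiating (1) once, using that the derivative of vol(A_t) is the surface area 2μ_{m−1}(A_t) and that ω_1 = 2 (a sketch consistent with the normalizations above):

\[
\mu_{m-1}(A_t) = \frac12\,\frac{d}{dt}\,\mathrm{vol}(A_t)
= \sum_{k=0}^{m-1} \binom{m-k}{1}\,\frac{\omega_{m-k}}{\omega_1}\, t^{m-1-k}\,\mu_k(A).
\]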
In this paper we prove the existence of tube formulas for any smooth valuation in a riemannian manifold. Then we develop a method to determine these formulas for the invariant valuations of an isotropic space. Using this method we compute all tube formulas explicitly in the case of complex space forms. In fact, our approach also reveals some intersting aspects in the case of real space forms.
Let us briefly describe our results. First, given a riemannian manifold M we construct a family T_t of tubular operators on the space V(M) of smooth valuations of M such that, for any μ ∈ V(M) and every compact set of positive reach A ⊂ M, one has T_t μ(A) = μ(A_t) for t ≥ 0 small enough (see Definition 4.1 and Corollary 4.7). Differentiating T_t at t = 0 yields an operator ∂ : V(M) → V(M). If G is a group of isometries of M acting transitively on the sphere bundle SM, the subspace V(M)^G of G-invariant valuations is finite dimensional, and the determination of the tube operators T_t reduces to the computation of the flow generated by ∂.
Once this general framework is established, we concentrate on the complex space forms CP^n_λ. For λ = 0 this refers to complex euclidean space C^n under the group of complex isometries, and for λ ≠ 0 this is the n-dimensional complex projective or hyperbolic space of constant holomorphic curvature 4λ, under the full group of isometries G. We simply denote V^n_{λ,C} := V(CP^n_λ)^G. For λ = 0, we will readily obtain the tube formulas T_t μ of all translation-invariant and U(n)-invariant continuous valuations μ thanks to the existence of an sl_2-module structure on the space Val^{U(n)} of such valuations. This structure, discovered by Bernig and Fu in [15], is induced by two natural operators Λ, L, the first of which is a normalization of ∂.
Remarkably, it turns out that also for λ ≠ 0 the derivation operator ∂ is closely related to the operators Λ, L of the flat space. Indeed, in Theorem 4.11 we find an isomorphism Φ_λ : Val^{U(n)} → V^n_{λ,C} intertwining ∂ with a combination of Λ and L; see (3). Using the decomposition of Val^{U(n)} into irreducible components, the computation of the tubular operator boils down to the solution of a Cauchy problem in some abstract model spaces, yielding our main result.
Theorem. There exists a basis {σ^λ_{k,r}} of the space V^n_{λ,C} of invariant valuations of CP^n_λ whose tube formulas take the explicit form (4). We describe the basis σ^λ_{k,r} explicitly in terms of the previously known valuations τ^λ_{k,p} of [16]. The tube formulas for the τ^λ_{k,p} can be easily obtained from the previous ones, as we also provide the expression of these valuations in terms of the σ^λ_{k,r}.
Curiously, the expressions (4) are extremely similar to those obtained by Santaló in the real space form S^m_λ of constant curvature λ. Indeed, similar expressions hold for a certain basis σ^λ_i of V^m_{λ,R}. The tube formula for σ_m = vol is, however, quite different. As an explanation for these similarities, we show in Theorem 4.12 the existence of a phenomenon similar (but not completely analogous) to (3). The paper concludes with a detailed study of the spectrum and the eigenspaces of the derivative operator ∂ in V^n_{λ,C} and V^m_{λ,R}. In particular, we compute the kernel of ∂ in V^n_{λ,C}; i.e. we determine the invariant valuations of CP^n_λ for which the tube formulas are constant. We also identify the images ∂(V^n_{λ,C}) and ∂(V^m_{λ,R}), and we compute the preimage by ∂ of any element belonging to these subspaces.
2.1. Valuations.
Let V be a finite-dimensional real vector space, and let K(V) be the space of convex compact subsets of V, endowed with the Hausdorff metric. A valuation on V is a map ϕ : K(V) → C satisfying ϕ(A ∪ B) + ϕ(A ∩ B) = ϕ(A) + ϕ(B) whenever A, B, A ∪ B ∈ K(V). The notion of valuation was extended to smooth manifolds by Alesker (cf. [5,6,10,7]). For simplicity we will focus on the case of a riemannian manifold M^n. It is also natural to consider here the class of compact sets of positive reach in M, which we denote R(M).
Let SM be the sphere bundle of M consisting of unit tangent vectors, and let π : SM → M be the canonical projection.
A smooth valuation on M is one of the form

μ(A) = ∫_{N(A)} ω + ∫_A η,   A ∈ R(M),

where ω ∈ Ω^{n−1}(SM) and η ∈ Ω^n(M) are complex-valued differential forms, and N(A) is the normal cycle of A (cf. e.g. [20]). We will denote ϕ = ⟦ω, η⟧ in this case. For any subgroup G ≤ Diff(M), we will denote by V^G(M) the space of G-invariant valuations; i.e. μ ∈ V(M) such that μ(gA) = μ(A) for all A ∈ R(M) and g ∈ G.
The kernel of the map (ω, η) ↦ ⟦ω, η⟧ was determined by Bernig and Bröcker in [13] as follows. Given ω ∈ Ω^{n−1}(SM), there exists ξ ∈ Ω^{n−2}(SM) such that d(ω + α ∧ ξ) is a multiple of α, the canonical contact form on SM. The unique n-form Dω := d(ω + α ∧ ξ) satisfying this condition is called the Rumin differential of ω. Then ⟦ω, η⟧ = 0 if and only if Dω + π*η = 0 and the fiber integral π_*ω vanishes. One of the most striking aspects of Alesker's theory of valuations on manifolds is the existence of a natural product on V(M), which turns this space into an algebra with χ as the unit element. The realization by Fu that this product is closely tied to kinematic formulas opened the door to the recent development of integral geometry in several spaces, including the complex space forms [1,15,16].
Another important algebraic structure is the convolution of valuations found by Bernig and Fu in linear spaces (cf. [14], but also [9]). This is a product on the dense subspace Val^∞(V) := Val(V) ∩ V(V) characterized as follows. Given A ∈ K(V) with smooth and positively curved boundary, we have μ_A(·) := vol(· + A) ∈ Val^∞(V). The convolution is determined by μ_A ∗ μ_B = μ_{A+B}, where + refers to the Minkowski sum. In particular, vol is the unit element of this operation.
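A direct consequence of this characterization, using the associativity of the Minkowski sum:

\[
(\mu_A * \mu_B)(K) = \mathrm{vol}(K + A + B) = \mu_{A+B}(K),
\]

and taking A = {0}, so that μ_A = vol, shows again that vol is the unit element.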
Real space forms.
The fundamental examples of valuations in Euclidean space R m are the intrinsic volumes µ k . These are implicitly defined by the Steiner formula where B m in the unit ball and ω i is the volume of the i-dimensional unit ball. In particular µ 0 = χ, µ m−1 = 2 perimeter, and µ n = vol m are intrinsic volumes. We will denote by S m λ the m-dimensional complete and simply connected riemannian manifold of constant curvature λ. That is, the sphere S m ( √ λ) for λ > 0, Euclidean space R n for λ = 0, and hyperbolic space H m ( √ −λ) for λ < 0. Let G λ,R be the group of orientation preserving isometries of S m λ ; i.e. G λ,R ∼ = SO(m + 1) for λ > 0, and G λ,R ∼ = SO(m) ⋊ R m for λ = 0, while G λ,R ∼ = P SO(m, 1) for λ < 0. We will denote by V m λ,R the space of G λ,R -invariant valuations of S m λ . Let κ 0 , . . . , κ m−1 ∈ Ω m−1 (SS m λ ) G λ,R be the differential forms defined in [19, §0.4.4]. In the same paper it was shown that the R-algebra of G λ,R -invariant differential forms is generated by κ 0 , . . . , κ m−1 , α, dα. It follows by [16,Prop. 2.6] that the following valuations constitute a basis of V m λ,R In euclidean space R m these valuations are proportional to the intrinsic volumes: For general λ, the σ λ i are proportional to the valuations τ λ i appearing in [11,24] As we will see, the normalization taken for the σ λ i makes the tube formulas in V m λ,R specially simple. A stronger reason in favor of this normalization is Theorem 4.12.
2.3. Complex space forms. We denote by CP^n_λ the complete, simply connected n-dimensional Kähler manifold of constant holomorphic curvature 4λ; i.e. the complex projective space (with the suitably normalized Fubini-Study metric) for λ > 0, the complex euclidean space C^n for λ = 0, and the complex hyperbolic space for λ < 0. For λ ≠ 0 we let G_{λ,C} be the full isometry group of CP^n_λ. For λ = 0 we put G_{λ,C} = U(n) ⋊ C^n. We denote by V^n_{λ,C} the space of G_{λ,C}-invariant valuations on CP^n_λ. Let {β_{k,q}, γ_{k,q}} ⊂ Ω^{2n−1}(SCP^n_λ)^{G_{λ,C}} be the differential forms introduced in [15] for λ = 0, and extended to the curved case λ ≠ 0 in [16]. Let also μ^λ_{k,q} denote the corresponding valuations, where dvol is the riemannian volume element. It was shown in [15,16] that the valuations μ^λ_{k,q} with max{0, k − n} ≤ q ≤ k/2 ≤ n constitute a basis of V^n_{λ,C}. It is convenient to emphasize that the μ^λ_{k,q} do not coincide with the hermitian intrinsic volumes μ^M_{k,q} for M = CP^n_λ introduced in [17]. For λ = 0 we simply write μ_{k,q} instead of μ^0_{k,q}. We will also use the so-called Tasaki valuations τ^λ_{k,q}. It will be useful to consider the following linear isomorphisms. More generally, whenever we have a valuation ν in Val^{U(n)} we will denote ν^λ := F_{λ,C}(ν). For instance τ^λ_{k,q} = F_{λ,C}(τ_{k,q}).
Tube formulas in linear spaces
Let V be an m-dimensional euclidean vector space. Given t ≥ 0, let T_t : Val(V) → Val(V) be given by T_t μ(·) := μ(· + tB^m), where B^m is the unit ball. We will call T_t the tubular operator. Let also ∂ : Val(V) → Val(V) be the operator given by ∂μ := (d/dt)|_{t=0} T_t μ. This operator has sometimes been denoted by Λ in the literature, but following [15] we reserve the symbol Λ for a certain normalization of ∂ (see (18)). The properties of the Minkowski sum ensure that T_{t+s} = T_t ∘ T_s = T_s ∘ T_t. Differentiating with respect to s at zero yields

(d/dt) T_t = ∂ ∘ T_t = T_t ∘ ∂.   (15)

It follows that

T_t = exp(t∂).   (16)

For each μ ∈ Val(V), the map t ↦ T_t μ is a polynomial in t of degree m by (12) and the Steiner formula (7) (or by [35]). Hence

T_t = Σ_{k=0}^{m} (t^k/k!) ∂^k.   (17)

Note also that, by (15) and (16), the derivative operator ∂ is (m+1)-nilpotent; i.e. ∂^{m+1} = 0. Let us compute the tube formula for the intrinsic volume μ_i for each 0 ≤ i ≤ m using (17). For that purpose we first compute ∂. Since T_{t+s} = T_s ∘ T_t, we can expand both sides using the Steiner formula; differentiating at s = 0 and comparing coefficients yields

∂μ_i = (m − i + 1) (ω_{m−i+1}/ω_{m−i}) μ_{i−1}.

Finally, using (17), we get

T_t μ_i = Σ_{k=0}^{i} \binom{m−k}{m−i} (ω_{m−k}/ω_{m−i}) t^{i−k} μ_k,

which is (2).
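Two immediate sanity checks of the formula for ∂ (with the convention μ_{−1} = 0): it recovers the first variation of volume, and it is consistent with the invariance of the Euler characteristic of convex bodies under outer parallel sets:

\[
\partial \mu_m = \frac{\omega_1}{\omega_0}\,\mu_{m-1} = 2\mu_{m-1}
\quad(\text{the surface area}),
\qquad
\partial \mu_0 = (m+1)\,\frac{\omega_{m+1}}{\omega_m}\,\mu_{-1} = 0 .
\]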
In order to compute the tube formulas for invariant valuations in C^n (i.e. to determine T_t on Val^{U(n)}), it will be useful to recall the sl_2-module structure of Val^{U(n)} found in [15]. Consider the linear maps Λ, L, H : Val(V) → Val(V) defined there, where · refers to the Alesker product.
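For orientation, the sl₂-module structure means that, with the normalizations chosen in [15] for Λ, L and H, these operators satisfy the usual sl₂ commutation relations (we record the standard presentation):

\[
[\Lambda, L] = H, \qquad [H, L] = 2L, \qquad [H, \Lambda] = -2\Lambda .
\]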
On Val^{U(n)}, moreover, one has an identity which implies the statement proved next. Proof. The decomposition into irreducible components is as follows, where V(m) is the (m+1)-dimensional irreducible sl_2-representation. In particular, for 0 ≤ 2r ≤ n, there exists a unique, up to a multiplicative constant, primitive element (i.e. annihilated by Λ) in each irreducible component of Val^{U(n)}. By the so-called Lefschetz decomposition, the L-orbits of these primitive elements constitute a basis of Val^{U(n)}. This basis was explicitly computed in [15] as follows.
In particular, the irreducible components of Val^{U(n)} are the subspaces I^{n,r}_0 described above. We are now able to compute the tube formulas in the complex case using (17).
Proof. By [15, Lemma 5.6], and then using (17), we obtain the tube formula. These tube formulas can also be given in terms of the valuations τ_{k,q}. To this end, we next compute their Lefschetz decomposition.
Proposition 3.5. The Lefschetz decomposition of τ_{k,r} is given by (33).
Proof. Consider the linear map ψ : Val^{U(n)} → Val^{U(n)} mapping τ_{k,r} to the left-hand side of (33). We need to show that ψ = id. Let us check that this endomorphism commutes with both Λ and L. To check commutation with Λ, we only need to verify a few identities; comparing term by term, they boil down to an identity which is trivial. Commutation with L is straightforward using Lπ_{k,i} = π_{k+1,i}. Given that ψ commutes with the operators Λ and L and Val^{U(n)} is multiplicity-free, Schur's lemma implies that for each 0 ≤ 2r ≤ n there exists a constant c_r such that ψ|_{I^{n,r}_0} = c_r id.
By plugging (28) and (33) into (30) one gets the tube formulas T_t τ_{k,p} in terms of the τ_{i,j}.
Here the map onto SM is the projection on the second factor, and φ_t = φ(·, t). We define the derivative operator ∂ = ∂_M by ∂μ := (d/dt)|_{t=0} T_t μ. To show that these definitions are consistent, suppose μ = ⟦ω, η⟧ = 0, and let us check that T_t μ = 0 for all t ≥ 0. This holds because α vanishes on N(A) and, since ⟦ω, η⟧ = 0, we have ∫_{N(A)} ω = −∫_A η. Therefore T_t μ = 0. Let us next establish some basic properties of these operators.
as α vanishes on N (A). Since ω, η = 0, we have N (A) ω = − A η. Therefore T t µ = 0. Let us next establish some basic properties of these operators.
Proof. Given a compact smooth submanifold N ⊂ SM , Since i T and φ * t commute, the result follows.
Together with Lemma 4.1, this yields Evaluating at t = 0, this gives i).
In order to prove ii), it is enough to check that both sides have the same derivative with respect to s, as they clearly agree for s = 0. By (35), we have Since φ * t and i T commute, it follows from (35) If µ ∈ V(M ) G for a group G acting on M by isometries, then also T t µ ∈ V(M ) G . Hence, in case V G (M ) is finite-dimensional, computing T t µ boilws down to solving the Cauchy problem (36) with initial condition T 0 µ = µ; i.e.
This is the approach we will follow to obtain the tube formulas for invariant valuations in complex space forms. Note that (37) coincides with (16), except that ∂ does not need to be nilpotent for general M. For such sets A we will prove that T_t μ(A) = μ(A_t) for any μ ∈ V(M) and sufficiently small t. By the previous definition, there is a well-defined map F_A : A_r \ A → SM, where γ is the unique minimizing geodesic such that γ(0) = f_A(p) and γ(d_A(p)) = p. i) For 0 < t < r the restriction F_A|_{∂A_t} gives a bilipschitz homeomorphism between ∂A_t and N(A), preserving the natural orientations; ii) the distance function d_A is of class C¹ in A_r \ A, φ_{d_A(p)}(F_A(p)) = (p, ∇d_A(p)), and ∂A_t = d_A^{−1}({t}) for 0 < t < r. In particular, each level set ∂A_t with 0 < t < r is a C¹-regular hypersurface with unit normal vector field ∇d_A.
The following propositions are certainly well-known.
Proposition 4.4. For 0 < s < r = r_A the set A_s has positive reach, and on A_r \ A_s we have d_{A_s} = d_A − s. In particular (A_s)_t = A_{t+s} for t + s < r.
To check surjectivity, given p ∈ A_t \ A, take ξ = F_A(p), s = d_A(p), and note that π ∘ φ(ξ, s) = p.
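In particular, the identity d_{A_s} = d_A − s on A_r \ A_s yields the semigroup property of the tubes directly:

\[
(A_s)_t = \{\, d_{A_s} \le t \,\} = \{\, d_A \le s + t \,\} = A_{s+t}, \qquad 0 < t + s < r .
\]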
where it is understood that σ^λ_{−1} = 0. Let us emphasize that (40) would make formal sense, but does not hold, for i = m − 1.
Note that by (18) a corresponding identity holds at λ = 0. Remarkably, a similar identity holds for all λ, which will be crucial for our determination of tube formulas in CP^n_λ.
Theorem 4.11. The linear isomorphism Φ_λ satisfies (3). By combining Proposition 4.10, Proposition 3.1 and the fact that ω_n/ω_{n−2} = 2π/n, this is straightforward to check. A similar phenomenon holds in real space forms, but restricted to a hyperplane of V^m_{λ,R}.
Theorem 4.12. The linear monomorphism Ψ_λ satisfies the analogous intertwining relation.
Proof. By Proposition 4.8 and Theorem 3.4 the relation follows. Note the difference of dimensions between the source and the target of Ψ_λ. We will show that there is no isomorphism between Val^{O(m)} and V^m_{λ,R} intertwining ∂ and Λ − λL. This is essentially due to the fact that (41) and (42) differ from (40).
A model space for tube formulas
We next perform some abstract computations that will easily lead to the tube formulas in both complex and real space forms via (62) and (64). The same approach will allow us to determine the kernel, the image, and the spectrum of the derivative operator ∂ on these spaces.
5.1. A system of differential equations.
It is well-known that the operators X, Y and H induce an sl_2-module structure on C[x, y], where V(m) is the subspace of m-homogeneous polynomials.
is the subspace of m-homogeneous polynomials: Motivated by Theorem 4.11, we consider Y λ = Y − λX, which is a derivation on C[x, y]. It will be sometimes convenient to consider the monomials m k x k y m−k . In these terms Our goal here is to solve the following Cauchy problem: find p k : R → V (m) such that i.e. to compute We will use the standard notation which is an analytic function in both λ and t, and cos λ (t) := d dt sin λ (t). Proposition 5.1. For any λ, t ∈ R, we have exp(tY λ )x = x cos λ (t) + y sin λ (t)=: u, exp(tY λ )y = y cos λ (t) − λx sin λ (t)=: v.
Proof. Since clearly Y_λ x = y and Y_λ² x = −λx, summing the exponential series term by term yields the first formula. In the same way we can compute exp(tY_λ)y.
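For a quick consistency check (using only the definitions above), note the elementary identities sin_λ'(t) = cos_λ(t), cos_λ'(t) = −λ sin_λ(t) and cos_λ(t)² + λ sin_λ(t)² = 1; differentiating u and v then reproduces the action of Y_λ:

\[
\frac{du}{dt} = -\lambda x\,\sin_\lambda(t) + y\,\cos_\lambda(t) = v, \qquad
\frac{dv}{dt} = -\lambda\bigl(x\,\cos_\lambda(t) + y\,\sin_\lambda(t)\bigr) = -\lambda u,
\]

in agreement with Y_λ x = y and Y_λ y = −λx, and with the initial values u(0) = x, v(0) = y.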
The following standard and elementary fact will be useful.
Lemma 5.2. Let A be an algebra. A vector field on A is a derivation iff its flow φ t satisfies φ t (pq) = φ t (p)φ t (q), ∀p, q ∈ A, ∀t ∈ R.
In other words, each φ t is an A-morphism.
5.2. Eigenvalues and eigenvectors of Y_λ.
Given f : V → V an endomorphism of C-vector spaces, we denote by spec(f) the set of eigenvalues of f and by E_α(f) the eigenspace associated to each α ∈ spec(f).
Proof. The result is trivial to check for m = 1. Since Y_λ is a derivation, the general case follows, as stated.
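A worked instance for m = 1: since Y_λ x = y and Y_λ y = −λx, in the basis {x, y} of V(1) one has

\[
Y_\lambda\big|_{V(1)} =
\begin{pmatrix} 0 & -\lambda \\ 1 & 0 \end{pmatrix},
\qquad
\det\bigl(Y_\lambda\big|_{V(1)} - \alpha\,\mathrm{id}\bigr) = \alpha^2 + \lambda,
\]

so spec(Y_λ|_{V(1)}) = {±√(−λ)}, which is also the spectrum of √(−λ)H mentioned in the remark below.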
Remark. It is interesting to notice that the spectra of Y_λ and √(−λ)H, when restricted to each V(m), are identical. These two operators are thus intertwined, e.g. by the linear isomorphism x^k y^{m−k} ↦ e_1^k e_2^{m−k}.
5.3. Image of Y_λ.
Using Lemma 5.4, we can conclude that Y_λ|_{V(m)} is bijective if and only if m is odd. If m is even, then the kernel is one-dimensional. An explicit description is the following.
Proposition 5.5. If m is even, then im(Y_λ|_{V(m)}) = ker Z_{m,λ} for an explicit linear functional Z_{m,λ}.
Proof. By the binomial formula, Z_{m,λ}(x^k y^{m−k}) can be computed explicitly if k is even, and Z_{m,λ}(x^k y^{m−k}) = 0 if k is odd. Therefore Z_{m,λ}(Y_λ p) = 0 for every p ∈ V(m). This shows that im(Y_λ) is a subspace of ker Z_{m,λ}. Given that Z_{m,λ} is not zero, we have dim ker Z_{m,λ} = m, and by Lemma 5.4 we know that the image of Y_λ|_{V(m)} has the same dimension. This yields (58). Next we compute, for even m and given ϕ in the image of Y_λ|_{V(m)}, the preimage. A simple computation using (49) gives the relevant coefficients, where c_{m,k} = 0 if m − k is even. With these ingredients at hand, for even m, we can now compute a preimage by Y_λ of any element in im Y_λ as follows.
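A minimal worked case illustrating the proposition, for m = 2: on V(2) = span{x², xy, y²} one computes directly

\[
Y_\lambda(x^2) = 2xy, \qquad Y_\lambda(xy) = y^2 - \lambda x^2, \qquad Y_\lambda(y^2) = -2\lambda xy,
\]

so im(Y_λ|_{V(2)}) = span{xy, y² − λx²} has dimension 2 = m, while the kernel is the one-dimensional subspace span{λx² + y²}, in accordance with Lemma 5.4.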
6. Tube formulas in S^m_λ and CP^n_λ
Here we will obtain our main result: the tube formulas for invariant valuations of CP^n_λ (i.e. the tubular operator T_t on V^n_{λ,C}). We will also recover Santaló's tube formulas for V^m_{λ,R} (cf. [39]) in a way that explains the similarities between the real and the complex space forms.
6.1. Tube formulas in complex space forms. Recalling (25) and Proposition 3.3, we get an isomorphism I : W_n → Val^{U(n)} of sl_2-modules by putting I(y^{2n−4r}) = π_{2r,r} (i.e. mapping Y-primitive elements to Λ-primitive elements) and extending along the orbits. By Theorem 4.11, the map J_{λ,C} := Φ_λ ∘ I : W_n → V^n_{λ,C} intertwines the corresponding operators. We define the valuations σ^λ_{k,r} via J_{λ,C} and arrive at our main theorem.
Remark. An interesting feature of the previous tube formulas is the following self-similarity property, which is explained by (62). Let G^{n,j}_λ : V^n_{λ,C} → V^{n+2j}_{λ,C}, G^{n,j}_λ(σ^λ_{k,r}) = σ^λ_{k+2j,r+j}. Then one has T_t ∘ G^{n,j}_λ = G^{n,j}_λ ∘ T_t.
Theorem 6.2. The tubular operator on V^{m+1}_{λ,R} is given as follows: for i = 0, . . . , m, … In particular … and thus … These formulas were first obtained by Santaló [39].
Remark. It is worth pointing out the similarity between tube formulas in real and complex space forms. More precisely, note that the isomorphism F_{n,r} : H^{2n−4r+1}_λ → I^{n,r}_λ between the subspaces H^{2n−4r+1}_λ ⊂ V^{2n−4r+1}_{λ,R} and I^{n,r}_λ ⊂ V^n_{λ,C} commutes with the tubular operator T_t. This is explained by (62) and (64).

6.3. Spectral analysis of the derivative map. Here we compute the eigenvalues and eigenvectors of ∂_{λ,R} and ∂_{λ,C}. Note that the tube formulas for such valuations are extremely simple: if ∂µ = aµ with a ∈ C, then T_t µ = e^{at} µ.

Proposition 6.3. For 0 ≤ 2r ≤ n, the restriction of ∂_{λ,C} to I^{n,r}_λ has the following (simple) eigenvalues and eigenspaces: … Hence ∂_{λ,C} diagonalizes on V^n_{λ,C} with the following eigenspaces: … for −n ≤ j ≤ n.
Remark. We conclude from Proposition 6.4 and Lemma 5.4 that there is no isomorphism between Val^{O(m)} and V^m_{λ,R} intertwining Λ − λL and ∂_{λ,R}. Indeed, these two operators have different spectra no matter the parity of m.
6.4. Stable valuations in complex space forms. We say that a valuation µ ∈ V(M) on a Riemannian manifold M is stable if ∂µ = 0 or, equivalently, if T_t µ = µ for all t. By Propositions 6.4 and 6.3, up to multiplicative constants, the Euler characteristic is the unique isometry-invariant stable valuation in S^m_λ. The complex case is more interesting: one obtains a stable valuation ψ_{2r} for each 0 ≤ 2r ≤ n.
Next we express the Euler characteristic as a combination of the stable valuations ψ_{2r}. Note in particular that χ is not confined to any ∂-invariant subspace I^{n,r}_λ.
Proof. Since χ is stable, it can be expressed as χ = Σ_j a_j ψ_{2j}. By [16, Theorem 3.11], … The coefficient of τ^λ_{2r,r} in this expansion is … Hence a_r = (λ/π)^r \binom{2r}{r} (r!/4^r) ω_{2n−2r} and the result follows.

6.5. Image of ∂_{λ,C} and ∂_{λ,R}. Next we describe the image of the operators ∂_{λ,C} and ∂_{λ,R}, and we compute the preimage of any element belonging to them.

Proposition 6.7. Given any ϕ = Σ_{k,r} a_{k,r} σ^λ_{k,r} ∈ V^n_{λ,C}, we have ϕ ∈ im ∂_{λ,C} if and only if \sum_{l=r}^{n−2r} a_{2l,r} \binom{n−2r}{l−r} λ^{n−l−r} = 0, for 0 ≤ 2r ≤ n.
Proof. Note that ϕ = Σ_r ϕ_r with ϕ_r = Σ_k a_{k,r} σ^λ_{k,r} is the decomposition of ϕ corresponding to V^n_{λ,C} = ⊕_{r=0}^{⌊n/2⌋} I^{n,r}_λ. By (62) and Proposition 5.5 we have ϕ ∈ im ∂_{λ,C} if and only if the stated condition holds for each r, where we used (59).
Proof. This follows at once from Proposition 5.6 after decomposing ϕ = Σ_r ϕ_r as in the previous proof.
Proof. By (8) and (9) where the term between brackets appears only if m − k is even. Using Proposition 4.8, this yields (80). The rest of the statement follows. | 2023-04-14T01:16:04.641Z | 2023-04-13T00:00:00.000 | {
"year": 2023,
"sha1": "2a7b0e597284526aa3a0e3ab1e02f92636610580",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1007/s00208-024-02929-2",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "2a7b0e597284526aa3a0e3ab1e02f92636610580",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
233430023 | pes2o/s2orc | v3-fos-license | Adhesion Molecules in Non-melanoma Skin Cancers: A Comprehensive Review
Basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) are the most frequently diagnosed cancers, generating significant medical and financial problems. Cutaneous carcinogenesis is a very complex process characterized by genetic and molecular alterations, and mediated by various proteins and pathways. Cell adhesion molecules (CAMs) are transmembrane proteins responsible for cell-to-cell and cell-to-extracellular matrix adhesion, engaged in all steps of tumor progression. Based on their structures they are divided into five major groups: cadherins, integrins, selectins, immunoglobulins and CD44 family. Cadherins, integrins and CD44 are the most studied in the context of non-melanoma skin cancers. The differences in expression of adhesion molecules may be related to the invasiveness of these tumors, through the loss of tissue integrity, neovascularization and alterations in intercellular signaling processes. In this article, each group of CAMs is briefly described and the present knowledge on their role in the development of non-melanoma skin cancers is summarized.
Cell Adhesion Molecules: Division, Functions, and Physiological Involvement
Cell adhesion molecules are transmembrane proteins responsible for cell-to-cell and cell-to-extracellular matrix adhesion (7), also playing a crucial role in intercellular signaling and the structure of the extracellular microenvironment. Moreover, they play important roles in physiologic processes like embryogenesis and organ growth, cell migration and differentiation, and wound healing, and are fundamental elements for the maintenance of tissue integrity. However, over 100 different CAMs participate in a variety of pathological processes such as inflammation, tumor invasion and metastasis (7). Changes in the expression and function of CAMs have recently been extensively studied. Based on their structures, they are divided into five major groups: cadherins, integrins, selectins, immunoglobulins and the CD44 family (Table I).
Cadherins
Cadherins are transmembrane glycoproteins and the most studied group of adhesion molecules, divided into classic cadherins (including E-epithelial, N-neural, P-placental), which are the main mediators of calcium-dependent cell-cell adhesions, and nonclassic cadherins, including desmosomal cadherins and newly discovered protocadherins (8). These adhesion molecules play a pivotal role during embryogenesis and morphogenesis and also in the maintenance of adult tissue architecture (8). They are widely present in the normal epithelium and determine its integrity. Major cadherins in the skin include E-cadherin and the desmosomal cadherin desmoglein 1 (9). E-cadherin mediates interactions between keratinocytes and also between keratinocytes and Langerhans cells, which are the main immune cells of the skin (10, 11). Classic cadherins are composed of five extracellular domains, a transmembrane segment and an intracellular domain. The most studied and first identified is E-cadherin, also known as cadherin-1 or epithelial cadherin, which is expressed in almost all epithelial tissues and is responsible for epithelial integrity and cell polarity (12). The cytoplasmic domain of E-cadherin interacts with groups of cytoplasmic regulatory proteins called catenins, which mediate binding of the complex to cytoskeletal actin filaments. The E-cadherin-catenin complex plays a key role in cellular adhesion, regulates cadherin stability, and functions in cell signaling (13). In the normal skin, E-cadherin shows homogeneous membranous distribution in the whole epidermis, with the exception of the lower pole of basal keratinocytes, which are in contact with the basement membrane (10).
Loss of expression or impairment of the function of E-cadherin leads to loss of cell polarity and changes in tissue architecture. The epithelial cells may acquire a mesenchymal-like phenotype, detach from their neighboring cells and migrate, and, in the case of neoplasms, form metastases. This process is called epithelial-mesenchymal transition (EMT), and the loss of E-cadherin and expression of N-cadherin, often called the "cadherin switch", is considered the most important indicator of this phenomenon (1). In this context, E-cadherin is also known as a metastasis suppressor protein (14,15).
Several studies showed abnormal expression of E-cadherin in various skin carcinomas. In BCC, there is an overall tendency for lower expression of E-cadherin, when compared with normal skin, and low expression is more frequently observed in the more aggressive histological types of the tumor (16)(17)(18)(19). The observed reduction of E-cadherin expression was more pronounced in the infiltrative and morpheaform BCCs, while the superficial and nodular types generally demonstrate less reduced expression of this molecule (17,20). Moreover, with the higher invasiveness of the tumor, the membranous expression of E-cadherin is reduced and the cytoplasmic expression is increased (21). It is consistent with the current state of knowledge that high E-cadherin expression in the cell cytoplasm and low expression in the cell membrane may be associated with tumor aggressiveness. In support, it has been suggested that correct cell adhesion requires strong membranous activity of E-cadherin (22,23).
In contrast, it is noteworthy that there are reports revealing no difference in the expression of E-cadherin in the BCC specimens, regardless of their histological type (24)(25)(26). A possible explanation of such discrepancy could be the different tissue fixation methods used (25).
The results of the studies in SCC are more consistent. In almost all analyzed SCC specimens, the expression of E-cadherin was reduced compared with normal epidermis (18,(23)(24)(27)(28)(29)(30)(31)(32) and this reduction was more pronounced in less differentiated and more invasive tumors (29,33). However, the contribution of lower levels of E-cadherin in the metastatic process is still debatable, with conflicting results demonstrating either lower or a similar expression of this molecule in tumors with and without lymph node metastases (24,25), although once more, the different fixation method utilized in one of those studies could explain such a difference (25).
Many studies on E-cadherin showed its contribution in both SCC and BCC, but only a few investigations have been conducted on N-cadherin in NMSC. N-cadherin is not found in normal squamous epithelium, although it is known for angiogenesis stimulation in some cancers (33). In BCC, N-cadherin was found to be up-regulated in the metastatic, but not in non-metastatic tumors (34), while in SCC, the tumors expressing N-cadherin demonstrated higher invasiveness (35). The low number of publications concerning N-cadherin in NMSC might indicate that the EMT phenomenon, in which cells lose epithelial and acquire mesenchymal markers, needs further investigation in both SCC and BCC, as its mechanism might involve additional molecules not yet assessed in these tumors.
Like other classic cadherins, P-cadherin is important for cell differentiation during embryonic development as well as for the maintenance of normal architecture of mature tissues (12). This molecule is present in the cells of the basal layer of the normal epidermis, therefore is considered as an indicator of proliferation, and is usually co-expressed with E-cadherin (36,37). However, both BCC and especially the peripheral sites of non-metastatic SCC specimens, demonstrate P-cadherin expression similar to the normal skin, whereas lower expression is observed only in the metastatic SCCs (36,38). Hence, it appears that it does not play a significant role in skin carcinogenesis.
Nonclassic cadherins, desmogleins (DSG) and desmocollins (DSC), play a role in the formation of desmosomes, which are the structures connecting epithelial cells. Desmoglein 1 is localized mainly in the superficial upper layers, and desmogleins 2 and 3 in basal and suprabasal layers of the normal epidermis, respectively (39). Desmocollin 1 is localized in the upper layers, and desmocollins 2 and 3 in the lower layers of the epithelium, respectively (39).
Expression of DSG1 was found to be significantly reduced in all cases of SCC and in cases of nodular BCC, and completely absent in the morpheaform and superficial types of BCC (40). Similarly, the reduced expression of DSG3 has also been demonstrated in BCC specimens (39). In contrast, DSG2 is significantly more intensively expressed in both types of NMSC, and its higher expression occurs more frequently in poorly differentiated SCCs (39,41). The discordant expression of these molecules might be related to the fact that DSG2 is regarded as a protein associated with proliferation and aggressiveness of tumor cells, and a marker of lower differentiation (42). As only insignificant differences in the expression of DSC 1-3 in the NMSC have been reported so far, their role in cutaneous carcinogenesis is still not unequivocal (43).
T-cadherin belongs to the nonclassical cadherins and in normal skin is expressed on keratinocytes of the basal cell layer of the epidermis. Although its function in the biology of the skin remains unclear, it has been considered as a suppressor of tumourigenesis in various cancers (9,44). Although low expression of T-cadherin has been observed in both BCC and SCC (44)(45)(46), there is contradictory evidence of even higher expression in BCC specimens, regardless of the histological type (47). Therefore, the role of this adhesion molecule in the development of NMSC appears to be much more complicated and requires further investigations (46,48).
Integrins
Integrins are heterodimeric transmembrane proteins composed of α and β subunits. At present, 18 α and 8 β subunits have been discovered, which in several combinations form 24 integrin complexes (49). These adhesion molecules consist of a larger extracellular domain, a single transmembrane domain, and a relatively small intracellular domain. The extracellular domain binds with connective tissue components such as collagen, fibronectin, laminin, whereas the intracellular domain of integrins interacts with the cytoskeleton. The integrins participate in cell-matrix and also in cell-cell adhesion and play an important role in various physiological processes including embryological development, hemostasis, wound healing, and signal transduction (7,50).
The main integrins of the normal human epidermis are α2β1 (a collagen receptor), α3β1 and α6β4 (mainly laminin receptors). The α5β1 and αvβ6 integrins (fibronectin receptors) are expressed at a nearly undetectable level in normal epidermis and their expression increases significantly during wound healing (51,52). Integrins are known to be involved also in EMT. The "cadherin switch" discussed in the previous paragraphs of this article is also characterized by changes in the expression of integrins, from those containing β4 subunits, present in hemidesmosomes, to those containing β1 and β3 subunits, which results in higher cellular motility (7,53,54).
Few studies on the expression of integrins in NMSC have been conducted. Studies on the expression of α2β1 and α3β1 integrins in BCC specimens have revealed high levels of these two adhesion molecules, with a tendency for a higher expression in the nodular type compared with the morpheaform (52,(55)(56)(57). Moreover, the localization of α3β1 integrins corresponded to areas with preserved basement membrane (52). Furthermore, epithelial cells of BCC also expressed αv, α4 and α5, which were not present in the normal epidermis (58).
Staining of SCC for α2β1 and α3β1 subunits showed absence or low expression of these integrins, which could be related to the higher malignancy of SCC compared with BCC (52).
Selectins
Selectins (Lec-CAM) are calcium-dependent transmembrane proteins composed of an N-terminal lectin domain, an epidermal growth factor (EGF) domain, 2-9 protein repeats, a transmembrane domain and a small cytoplasmic domain (59). This class of adhesion molecules consists of endothelial (E), leukocyte (L) and platelet (P) selectins. E-selectin in normal skin demonstrates a weak expression in the microvessels and, so far, it seems to be the only selectin involved in cancer metastatic activity, probably through the interactions between endothelial cells and tumour surface selectin ligands (50,60). The results of studies on the expression of E-selectin in the normal skin and NMSC revealed its negative expression in normal skin and positive expression in SCC and BCC. Moreover, in cases of SCC, there was a strong positive staining for E-selectin on malignant keratinocytes and the vascular endothelium. In contrast, no tumour cell staining, but endothelial expression of E-selectin, was observed in BCC (61). Hence, future studies are warranted to examine the involvement of selectins in the biology of NMSC (62,63).
CD44
The transmembrane cell surface molecule CD44, which is present in multiple isoforms, is broadly distributed in a wide variety of tissues, including the skin, where it is distributed in the hair follicles, sweat glands and in the epidermis, with the exception of the basement membrane and the granular and corneous layers (64). The physiological distribution of CD44 is concordant with the distribution of hyaluronic acid, as CD44 is thought to be a receptor for hyaluronic acid and also an important ligand of E-selectin (50,64,65).
The presence of CD44 in NMSC is controversial, although, in BCC, there is an overall tendency for lower expression (66)(67)(68)(69)(70)(71). Some inconsistency in the results of the published studies remains regarding SCC. Although some of them demonstrated that the expression of CD44 in SCC is reduced, and decreased CD44 expression is mostly found in the more invasive and aggressive tumors (65-66, 69, 72), other analyses did not confirm such findings (68,71,73). However, in one of the contradictory studies, the process of specimen preservation was substantially different, as the samples were frozen and not paraffin-embedded, hence the availability of the antigens was significantly higher (74).
The Immunoglobulin Family of Adhesion Molecules
Immunoglobulin-like cell adhesion molecules constitute the largest and most diverse group of adhesion molecules, which includes vascular cell adhesion molecules (VCAMs), intercellular adhesion molecules (ICAMs), neural cell adhesion molecules (NCAMs), platelet endothelial cell adhesion molecule (PECAM), and nectins (7,49). All members of this family consist of an extracellular domain (which contains one or more immunoglobulin-like domains), a single transmembrane domain, and a cytoplasmic tail (75). VCAM (CD106) is widely distributed and highly expressed primarily in the endothelial cells (76). Its expression increases, especially in states of inflammation, on tissue macrophages, dendritic cells and epithelial cells (76,77). Moreover, it plays the role of an endothelial ligand for VLA-4 (Very Late Antigen-4, or integrin α4β1) of the β1 subfamily of integrins (78). A study on the expression of VCAM in SCC demonstrated its intense presence in poorly differentiated SCC, but only moderate expression in well-differentiated SCC (79). VCAM-1 was not detected in BCC tissues and negative or weak staining was observed in the epidermis adjacent to BCC (61,80). The few publications concerning VCAM in NMSC might indicate that this molecule does not play any significant role in skin cancer development, although its increased expression in SCC tumors could potentially support the role of inflammation in the pathogenesis of these neoplasms.
The ICAM family consists of five members (ICAM-1 to ICAM-5), which are important elements of intercellular, but not cell-matrix adhesion (81). ICAM-1 (CD54) contains five Ig-like domains and is only occasionally detectable on keratinocytes of the normal skin, mostly in the T-cell-present regions. It is, however, found on endothelial cells, since ICAM-1 is one of the most expressed endothelial surface adhesion molecules (78). The primary role of ICAM-1 in the human skin is adhesion to the LFA-1, present on circulatory leukocytes. Studies on ICAM expression on NMSC revealed lack of expression on BCC tumour cells (61,80,(82)(83)(84) with minimal expression in the peritumoral stroma (80,82,84). In SCC, a dramatic increase in ICAMs in poorly differentiated SCC was shown, while in well-differentiated tumors, there was only focal ICAM expression (79). These results are consistent with the pathophysiology of NMSC, as ICAM-1 expression on keratinocytes increases in chronic inflammatory conditions, which contributes to the development of SCC, but not BCC (81).
NCAM (CD56), another member of the immunoglobulin family, has originally been found on neuronal cells. It plays an important role in morphogenesis, in neural development and mediates intercellular adhesion in various tissues (85). Studies on this adhesion molecule in BCC and SCC revealed strong immunoreactivity in the majority of stained BCC specimens regardless of their histological type, and negative expression in almost all SCC samples (86)(87)(88)(89). These results could indicate the difference in the origin of these NMSCs. As the cells of the hair follicle in the normal skin also demonstrate positive immunoreactivity for NCAM, its strong expression in BCC could be a consequence of its development from these cells, which has been supported by studies in recent preclinical models (86,(90)(91). However, other theories regarding the origin of BCC should also be taken into consideration (92,93).
PECAM (CD31) is physiologically involved in leukocyte migration, signal transduction and angiogenesis (94,95). Staining of NMSC for CD31 showed its elevated expression in the areas adjacent to both SCC and BCC, when compared with normal skin, with expression significantly higher in the areas surrounding SCC (80,96). In the tumor area, CD31 was identified in SCC, but not in BCC (96). However, in another study, the expression of CD31 did not differ significantly between SCC and BCC cases, and no difference was observed when consecutive different grades of SCC progression or the presence of tumor metastases were analyzed (97,98). Hence, it appears that the difference in the metastatic potential of squamous and basal cell carcinomas is not only due to differences in angiogenesis, but also in multiple other factors.
The nectin family comprises four members and regulates various cellular functions, such as mobility, adhesion, proliferation, polarization, and differentiation (99). In the normal skin, nectin 1α was colocalized with E-cadherin in cell-cell adherens junctions of the epidermis and was detected in all living layers of the epidermis, with the strongest staining in the spinous layer (100). A study on the expression of nectin 1α in NMSC revealed a reduction in staining, more pronounced in morpheaform BCC and in SCC than in the solid type BCC, which is rather consistent with the expression of E-cadherin in NMSC. These results might potentially indicate that a reduction of nectin expression could be associated with the invasiveness of the tumor, but they definitely require further investigation (100).
EpCAM (epithelial cell adhesion molecule) is considered a molecule involved in cellular signaling rather than a cell adhesion molecule, because structurally it does not resemble any of the five major families of CAM, and its classification as an adhesion molecule is still debatable. However, evidence derived from studies on various malignancies indicates that it might be also involved in the development of those pathologies (101).
Physiologically, EpCAM is expressed mainly in glandular epithelial cells, while in pathological conditions, it is mostly overexpressed on the cells of epithelial tumours, but not in non-epithelial tumors (101). There have been reports demonstrating positive staining for EpCAM across all types of BCC (102)(103)(104)(105), while loss of its expression was identified in the front of the tumour and in its infiltrative islands (106). There was a negative expression of EpCAM in SCC, irrespective of the histological type or grade of differentiation (104, 105). Hence, although in non-cutaneous epithelial malignancies, increased expression of EpCAM has been considered as a predictor of worse clinical outcomes (107,108), the evidence derived from studies on NMSCs appears to be insufficient to confirm such hypotheses.
Conclusion
Progression of cancer is a result of altered intercellular interactions and loss of adhesion between the neoplastic cells and the extracellular microenvironment. Numerous studies indicate that differences in the expression of adhesion molecules are related to the invasiveness of skin cancers. Downregulation of some physiologically present adhesion molecules has recently been suggested to be a sign of a higher tumor metastatic potential. While in the early stages both SCC and BCC are treatable, in some cases, especially SCC, the tumor recurs locally and even metastasizes, leading to death. Therefore, it is important to identify the more aggressive tumors that require closer follow-up. Besides established prognostic factors like size, anatomic site, and clinical and histological type of a primary tumor, adhesion molecules may become additional prognostic biomarkers of non-melanoma skin cancers. Cell adhesion molecules could also be an attractive therapeutic target because their extracellular domains could be easily accessed by antibodies or small-molecule inhibitors. Therefore, future research can open new perspectives for more effective skin cancer treatment.
Conflicts of Interest
The Authors declare that there are no conflicts of interest regarding the publication of this article.
Authors' Contributions
JCS and JPD contributed to study design, collection of data, writing of the manuscript's draft and the preparation of the final version of the article. | 2021-04-29T13:14:06.016Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "4956d8d28848d16af798d508c03fdd714a2e306a",
"oa_license": null,
"oa_url": "https://iv.iiarjournals.org/content/invivo/35/3/1327.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "4956d8d28848d16af798d508c03fdd714a2e306a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119319540 | pes2o/s2orc | v3-fos-license | Oscillatory criteria for the second order linear ordinary differential equations in the marginal sub extremal and extremal cases
The Riccati equation method is used to establish three new oscillatory criteria for second order linear ordinary differential equations in the marginal, sub extremal and extremal cases. We show that the first of these criteria implies J. Deng's oscillatory criterion. An extremal oscillatory condition for the Mathieu equation is obtained. The obtained results are compared with some known oscillatory criteria.
(1)

Definition 1. The equation (1) is said to be oscillatory if every one of its solutions has arbitrarily large zeros.
Study of the oscillatory behavior of Eq. (1) is an important problem of the qualitative theory of differential equations, and many works are devoted to it (see [1] and the works cited therein, and [2-9]). Study of the oscillatory property of Eq. (1) through properties of its coefficients was developed (and is still being developed) in general in two directions. The goal of the first direction is to study the oscillatory property of Eq. (1) on a finite interval (interval oscillatory criteria: Sturm, J. S. W. Wong [2], J. G. Sun, C. H. Ou and J. S. W. Wong [3], Q. Kong [4]). Then oscillation of Eq. (1) follows from its oscillation on a countable set of finite intervals. The second one studies the correlation between the oscillatory property of Eq.
; or (ii) the following inequality holds: lim sup …
In this work we use the Riccati equation method to establish three new oscillatory criteria for Eq. (1) in three different directions, related to the cases when: a) …

Theorem 3. Let, for some f ∈ Ω, λ ∈ R, α ≥ 1, the following conditions be satisfied:
Then Eq. (1) is oscillatory.
Remark 1. For f(t) ≡ 1 the condition 2) of Theorem 3 is always satisfied.

Remark 2. The condition 4) of Theorem 3 can be replaced by one of the following conditions … Indeed, when one of these conditions takes place we have …

Corollary 1. Let, for some λ ∈ R, the following conditions be satisfied:
Then Eq. (1) is oscillatory.
Let us show that Theorem 1 is a consequence of Corollary 1. Suppose the condition of Theorem 1 holds. Then from the convergence of the integral … it follows that condition A) of Corollary 1 takes place. According to the condition of Theorem 1, choose … Then, taking into account (2), we will have …, where α_0, α, β, γ are some real constants, α_0 > 1/4, γ > 0, αβ ≠ 0. Without loss of generality we will assume that …
Since sin(βt) …, from here and from (4) it follows that lim … From here it follows that for Eq. (3) condition B) of Corollary 1 is satisfied too. So Eq. (3) is oscillatory by Corollary 1.
Theorem 5. Let, for some f ∈ Ω, the condition 1) and the condition … be satisfied. Then Eq. (1) is oscillatory.
Consider the Mathieu equation (see [10], p. 111) … Here δ and ε are some real constants, ε ≠ 0. Set … In this case Theorem 2 is not applicable to Eq. (6).

Remark 6. Using standard methods for the integration of trigonometric functions, it is easy to calculate the value of the integral appearing in the expression for F_ε(µ).

Example 3. It is not difficult to verify that m(4) < F_4(1) = −(π + 2)/(2(π + 1)). Therefore, by Corollary 3, the equation is oscillatory.

Remark 7. It is not difficult to verify that Sturm's comparison criterion (comparison of Eq. (7) with an equation φ''(t) + q_0 φ(t) = 0, where q_0 is any constant > 0) is not applicable to Eq. (7).
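Definition 1 can be illustrated numerically: for an equation of Mathieu type one can integrate a solution and count its sign changes, which keep growing with the length of the interval when the equation is oscillatory. The exact coefficients of Eq. (6) and Example 3 are not fully recoverable from the text above, so the sketch below uses the standard form φ''(t) + (δ + ε cos 2t)φ(t) = 0 with illustrative constants δ = 1, ε = 1/2. For this choice the coefficient stays ≥ 1/2, so oscillation also follows from a Sturm comparison, whereas the criteria of this paper target cases where such a comparison fails (cf. Remark 7):

```python
import numpy as np
from scipy.integrate import solve_ivp

delta, eps = 1.0, 0.5  # illustrative constants, not those of Example 3

def rhs(t, z):
    phi, dphi = z
    return [dphi, -(delta + eps * np.cos(2.0 * t)) * phi]

T = 200.0
sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0],
                t_eval=np.linspace(0.0, T, 20001), rtol=1e-9, atol=1e-12)

phi = sol.y[0]
sign_changes = np.count_nonzero(np.sign(phi[:-1]) != np.sign(phi[1:]))
print(f"sign changes of phi on [0, {T:g}]: {sign_changes}")
# rerunning with larger T shows the count growing without bound, i.e. oscillation
```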
3. Proof of the main results. To prove the main results we need some auxiliary propositions.
This lemma is proved in [11] for the case of continuous a(t) and b(t); for locally integrable a(t) and b(t) the proof is analogous.
This lemma is proved in [12] for the case of continuous a(t) and b(t); for locally integrable a(t) and b(t) the proof is analogous.
Let y(t) be a t_1-regular solution of Eq. (8), and consider the integral … This theorem is proved in [11] for the case of continuous a(t) and b(t); for locally integrable a(t) and b(t) the proof is analogous.
3.2. Proof of the main results. Proof of Theorem 3. Suppose Eq. (1) is not oscillatory. Then the equation has a t_1-regular solution for some t_1 ≥ t_0 (see [13], p. 332). In this equation make the substitution … We obtain … Since Eq. (9) has a t_1-regular solution, from (10) … for some t_2 ≥ t_1, where y*(t) is the unique t_1-extremal solution of Eq. (11). Let us show that … By virtue of (11) we have … Suppose that the relation (13) is not true. Then from (12) and (14) it follows that y*(t) is a decreasing function on [t_1, +∞) with a negative finite limit … From here and (14) it follows that ∫_{t_1}^{+∞} … It is evident that … We have … From the condition 4) and from (15) it follows that lim_{n→+∞} 2y*(+∞) θ_n^α … for some infinitely large sequence {θ_n}_{n=1}^{+∞}. Obviously, by virtue of (17), lim … > 0, which contradicts (16). The obtained contradiction proves (13). It follows from the condition 2) that there exists an infinitely large sequence {ξ_n}_{n=1}^{+∞} such that S ≡ sup … We show that the solution x_0(t) of Eq. (9) with … is t_3-normal. By 5), from (20) it follows that x*(t_3) = y*(t_3) − …, where Q_1(t) ≡ Q_1(t) + … Due to the equality 7), choose t_3 ≥ t_1 such that …, k = 1, 2, . . . . It is easy to show that | 2018-09-26T06:51:09.000Z | 2018-09-26T00:00:00.000 |
"year": 2019,
"sha1": "be146d00c68e94ada993abb0372d42fea89e3e29",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "31c8c057b7dc75e241b22cfc28ce95884bc22791",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
265172479 | pes2o/s2orc | v3-fos-license | Optimal Worker Allocation of Food Manufacturing System using Simulation and Data Envelopment Analysis
This study suggests employing simulation and data envelopment analysis to support decision making for determining the optimal worker allocation for a food manufacturing production line. The simulation model for food production is based on a real-world example from the food industry, focusing on a manufacturing company. The challenge faced by the company involves a constrained number of workers allocated to four processes. An imbalance in the number of workers in each process affects the productivity of the company. Through simulation modeling of the actual systems, it was discovered that the filling process presents a bottleneck due to its significantly higher average waiting time compared to the other processes. A valid simulation model was subsequently employed to generate potential improvements. Nineteen improvement models were proposed and assessed based on various selection criteria, including average total production time, average number of entities still in the system, total production, and average resource utilization. Among the nineteen improvement models, Improvement Model 11 (IM11) emerged as the best improvement model. The optimal worker allocation alternative involves assigning two workers to each process, leading to anticipated increases in total production and average resource utilization. Utilizing simulation and data envelopment analysis can help the management of the company to make better decisions in determining the optimal worker allocation.
INTRODUCTION
There are small and medium enterprise (SME) industries which still use low-level technologies and are often relatively labour-intensive in their operations. Innovative processes and products should be introduced to enhance efficiency. However, this problem may not have a straightforward solution due to the limited organizational capacity and resources of SMEs. They often lack efficient and experienced management, as well as a clear strategic vision and mission. Furthermore, the development and implementation of innovation can be challenging for these companies because they struggle with resource allocation, resource adjustment, and the acquisition of necessary information and knowledge. An imbalance in the number of workers at each process could interfere with the process flow of the manufacturing system and hence cause bottlenecks. Multi-agent systems proposed in the literature are attractive and suitable for solving resource allocation problems; however, the problems they solve are static. Most resource allocation problems are practically dynamic, involving discrete changes over time: the allocation is made based on the current state of the system and is maintained until a new event occurs.
Therefore, this study is carried out to help food manufacturing industries solve the imbalanced number of workers at each process by determining the optimal worker allocation alternative using simulation and data envelopment analysis (DEA).
METHODS AND DATA
Food and beverage manufacturing companies are always competing in producing high-quality products. Small food and beverage companies can compete by ensuring the production process used is up-to-date and efficient. Therefore, looking at the growth of SMEs in the context of the food manufacturing industry in Malaysia, improvements can be made across the entire supply chain, from agricultural products, through food processing and food distribution, and finally to consumers. This improvement needs to be made because it has a significant impact on the growth of the food manufacturing industry [1].
Manufacturing industries are facing significant challenges due to the continuous growth in manufacturing technology. These challenges encompass meeting customer satisfaction, accurately forecasting product demand, enhancing the efficiency of manufacturing operations, thriving in a competitive market, and adapting to technological advancements. In this landscape, simulation emerges as a valuable tool for replicating real-time outcomes, facilitating the measurement of productivity. In the current context, industrial technology has progressed into the fourth stage of industrialization, referred to as Industry 4.0 [2]. To remain competitive, a well-optimized manufacturing system plays a crucial role in swiftly manufacturing innovative products and reducing the time until the product is ready for sale in the market [3]. A good manufacturing system requires several considerations, such as production scheduling, production planning and control, the organization of the facilities layout, and many others. Therefore, by using simulation, the actual manufacturing system of the food industry will be examined. Subsequently, leveraging the insights gained from this analysis, several improvements can be recommended for the system.
The worker allocation problem is similar to the production line balancing problem: finding the optimal allocation of workers to machines, either by minimizing the number of machines based on the cycle time or by minimizing the cycle time based on the number of machines, taking into account the constraint on the number of workers. In contrast, the problem in this study differs from the assembly line problem in that it involves allocating workers to machines while ensuring non-crossing walking routes for the workers.
In order to achieve specific objectives and comply with all constraints, labor-intensive production systems need to ensure optimal resource allocation. The problem of worker allocation concerns determining which workers will be assigned to which tasks and locations. The objective is to reduce production time, labor costs and overtime, while at the same time improving the quality of production. In practice, managers usually allocate workers based on work experience. Randomly allocating workers may result in a long production period and excessive costs.
Resource allocation is regarded as a very important factor because the allocation of workers, the reduction of cycle time, the scheduling of machines and the assignment of tasks all act on resources that are to be optimized. Methods that solve such problems therefore provide a very useful contribution to resolving worker allocation problems.
Due to its significant impact on optimizing cost and productivity, discrete-event simulation has been used extensively to examine operations for the past three decades. Arena software can be used to develop discrete-event simulation models; the advantage of Arena software is its flexibility in developing simulation models across different industries. Meanwhile, data envelopment analysis is an optimization method proposed for improving operations with multiple inputs and outputs. The CCR model, a data envelopment analysis model introduced by Charnes et al. [4], was designed to measure the relative efficiency of homogeneous decision-making units. Banker et al. [5] then extended the CCR model, and the extended model is known as the BCC model. In this study, the BCC model is applied. The main advantage of this method is its ability to handle multiple inputs and outputs. It is also useful because it takes returns to scale into consideration when determining the efficiency scores. Given the multiple inputs and outputs to be analyzed in this study, the combination of simulation and the DEA-BCC model appears to be an ideal method.
In this study, several steps are used as the methodology to achieve the objectives. Firstly, data are collected for each process, followed by analysis of the obtained data using the analysis tools. Next, the data are used to develop a simulation model based on the actual system. The simulation model must then undergo a verification and validation process. Upon successful validation, the simulation model is extended to encompass various alternatives (improvement models) to solve the problem. Simulation models corresponding to these improvement models are used to acquire the inputs and outputs, which represent the performance criteria. The next step is to develop the DEA-BCC model, with the improvement models serving as the decision-making units (DMUs). The best improvement model, with the best performance criteria, is identified after analysing the results. Figure 1 illustrates the flow of the research framework in brief. This research uses secondary data obtained from an SME food manufacturing company located in Penang. The data relate to the food manufacturing system which produces chocolate malt. The manufacturing process for chocolate malt powder involves several stages, resulting in the creation of a high-quality product. The flowchart detailing each process within the system's operation is depicted in Figure 2. The initial stage is known as the premix process; this process formulates the raw materials for making the product, which is chocolate malt powder. In this process, only two workers are needed, and the process is carried out entirely by workers. All raw materials that have been formulated in the premix process are mixed in the mixing process. This process needs two workers and machines.
The machines must rotate for 20 minutes per 100 kg of powder that was formulated in the previous process. The time is set to obtain good-quality chocolate malt. The powder from the mixing process is packed into 1 kg packets during the filling process. This process needs two workers and machines. The final stage in the operating system is the packaging process. In this process, only one worker and a machine are needed, because the packaging processing time is short compared with the previous three processes. This process packs the 1 kg packets into boxes; each box holds 12 packets of the 1 kg chocolate malt powder.
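For readers who want to experiment with such a line without Arena, the following is a minimal discrete-event sketch of the four-stage process using the SimPy library. All processing-time distributions, the arrival rate, and the batch granularity below are illustrative placeholders, not the fitted values of Table 1; only the worker counts (2, 2, 2, 1) and the fixed 20 min mixing time per 100 kg batch follow the description above. Comparing the average waiting time per station is the same idea used in the study to spot the filling bottleneck:

```python
import random
import simpy

random.seed(42)

# process name -> (workers at the station, sampled processing time in minutes);
# the triangular parameters are placeholders, not the Table 1 distributions
PROC = {
    "premix":    (2, lambda: random.triangular(15, 25, 20)),
    "mixing":    (2, lambda: 20.0),  # fixed: 20 min per 100 kg batch
    "filling":   (2, lambda: random.triangular(25, 40, 30)),
    "packaging": (1, lambda: random.triangular(5, 10, 7)),
}
N_BATCHES = 50

def batch(env, stations, waits):
    # one 100 kg batch flowing through the four stations in order
    for proc, (station, sample) in stations.items():
        arrive = env.now
        with station.request() as req:
            yield req                        # wait for a free worker
            waits[proc] += env.now - arrive  # accumulate queueing time
            yield env.timeout(sample())      # processing

def source(env, stations, waits):
    for _ in range(N_BATCHES):
        env.process(batch(env, stations, waits))
        yield env.timeout(random.expovariate(1.0 / 10.0))  # ~10 min between batches

env = simpy.Environment()
stations = {p: (simpy.Resource(env, capacity=w), s) for p, (w, s) in PROC.items()}
waits = {p: 0.0 for p in PROC}
env.process(source(env, stations, waits))
env.run()

for p, w in waits.items():  # the largest average wait flags the bottleneck station
    print(f"{p:10s} average wait: {w / N_BATCHES:6.1f} min")
```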
RESULTS AND DISCUSSION
After gathering all the required data and information, the modules were compiled and connected, resulting in a flowchart view within the Arena software. Figure 3 shows the simulation model for the manufacturing system using the Arena software. The statistical distribution of the data for the simulation model of the actual manufacturing system is presented in Table 1. Following the development of the simulation model using Arena software, it undergoes a verification and validation process. Model verification ensures the simulation model is free from logical errors. Model validation, on the other hand, ensures the simulation model can represent the actual system. The processing time for each process was used to validate the simulation model; the numbers in and out were also used. The calculations of the differences can be found in Table 2. The model is valid if the percentage differences are less than 10%. Since the percentage differences are all less than 10%, the model is valid [6]. The next step involves identifying 19 improvement models based on assumptions related to the minimum, maximum, and total number of workers. The minimum number of workers is one, the maximum is three, and the total number of workers is eight.
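As a side note on the validation rule just described, the 10% acceptance check is a one-line computation. The figures below are hypothetical stand-ins for illustration (the real comparison lives in Table 2):

```python
def pct_diff(simulated, actual):
    """Percentage difference between simulation output and actual data."""
    return abs(simulated - actual) / actual * 100.0

THRESHOLD = 10.0  # the model is accepted when every difference is below 10% [6]

# hypothetical stand-in values, not the Table 2 figures
checks = {"number out": (1180.0, 1224.0), "premix time (min)": (19.4, 20.1)}
for name, (sim, act) in checks.items():
    d = pct_diff(sim, act)
    print(f"{name}: {d:.1f}% -> {'valid' if d < THRESHOLD else 'not valid'}")
```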
Table 3 shows all the improvement models that have been recorded.
A simulation model of each improvement model is developed and run using Arena software to obtain the inputs and outputs. The inputs were the average total production time (v1) and the average number of entities in the system (v2), while the outputs were the total production (u1) and the average worker utilization (u2). The inputs and outputs are shown in Table 4, and these inputs and outputs were employed in the next stage to conclude the decision on the optimal worker allocation alternative.
The efficiency scores are evaluated using the DEA-BCC model. The model is output oriented, measuring the efficiency scores for accomplishing a maximum level of outputs given the inputs, and is solved using Lingo 12.0. The resulting output-oriented BCC-DEA scores are shown in Table 4. This study proposes that Improvement Model 11 (IM11) is the best improvement model to be compared with the actual system, as it has an efficiency score of one. Table 6 shows the comparison between the actual system and IM11. It illustrates that both the total production and the average resource utilization of IM11 increase when compared with the results of the actual system. Specifically, the total production and the average worker utilization show increments of 622 packets and 0.1193, respectively. Moreover, both the average total production time and the average number of entities in the system also increase compared with the actual system. Notably, while the average total production time increases, the total production also increases. As a result of this study, the optimal operator allocation is IM11, as the proposed modification can improve the actual system's efficiency and productivity.
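For reference, the output-oriented BCC envelopment program solved here with Lingo 12.0 can also be sketched with open-source tools. The program for DMU o maximizes the output-expansion factor η subject to Σ_j λ_j x_ij ≤ x_io, Σ_j λ_j y_rj ≥ η y_ro and Σ_j λ_j = 1 (variable returns to scale); a DMU is efficient exactly when 1/η* = 1, matching the score of one reported for IM11. The data values below are hypothetical placeholders, not the figures of Table 4:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, o):
    """Output-oriented BCC (VRS) envelopment model for DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns efficiency 1/eta*."""
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                 # minimize -eta, i.e. maximize eta
    A_in = np.c_[np.zeros(X.shape[1]), X.T]      # sum_j lam_j x_ij <= x_io
    A_out = np.c_[Y[o], -Y.T]                    # eta*y_ro - sum_j lam_j y_rj <= 0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[X[o], np.zeros(Y.shape[1])]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS convexity: sum_j lam_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    assert res.success
    return 1.0 / res.x[0]

# hypothetical (v1, v2 | u1, u2) values for three alternatives, not Table 4 data
X = np.array([[480.0, 35.0], [510.0, 40.0], [495.0, 33.0]])   # inputs
Y = np.array([[5200.0, 0.62], [5822.0, 0.74], [5400.0, 0.68]])  # outputs
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {bcc_output_efficiency(X, Y, o):.3f}")
```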
CONCLUSION
In this study, nineteen improvement models for worker allocation were proposed. Utilizing the DEA-BCC model, each improvement model's efficiency was analyzed to identify the optimal solution for enhancing worker allocation within the company's manufacturing system. The results indicated that only Improvement Model 11 is efficient, achieving an efficiency score of one. Consequently, IM11 has been chosen as the best improvement model. Consistent with the simulation results, IM11 has the potential to increase both total production and average worker utilization.
Figure 3: Flowchart view of the simulation model using Arena software
Table 1: The distributions of the process time derived from Arena Input Analyzer
Table 2: Differences between simulation output and the actual data for each process
Table 3: Alternatives of Improvement Models for operator allocation at each process
Table 4: Inputs, Outputs and Efficiency Score of each alternative
Table 6: Inputs and Outputs of each alternative | 2023-11-15T17:28:24.593Z | 2023-10-10T00:00:00.000 | {
"year": 2023,
"sha1": "f678844b541c3c88d9057c818ada09e36e843a03",
"oa_license": "CCBYNCSA",
"oa_url": "https://ejournal.unimap.edu.my/index.php/amci/article/download/315/204",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8451dce9e814dcd7461ebd8ebe50812d0ce2d721",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Business",
"Engineering"
],
"extfieldsofstudy": []
} |
151260977 | pes2o/s2orc | v3-fos-license | To Be Both (and More): Immigration and Identity Multiplicity
Immigrants and their descendants make up a growing share of the population in countries across Europe, North America, and Oceania. This large-scale immigration challenges once relatively stable notions of ethnic, national (or regional), and religious identities. Immigrants and their children confront the task of defining themselves in a new and unfamiliar context. Questions regarding immigrants' identifications with their ethnic and national groups—but also with local, religious, and supranational groups—have animated national policy debates. This special issue brings together research on migrants' sense of "being both," and the research and policy implications of this particular form of multiple identification. This introductory article discusses the conceptualisation of multiple identification, the importance of group dynamics for the adoption of dual identities, as well as the implications of identification with multiple social groups for immigrants and their receiving societies.
"I am a German if we win, but an immigrant if we lose." This statement was made by MesutÖzil, a third-generation Turkish German and one of the stars of the German national soccer (football) team that won the 2014 FIFA World Cup. Four years later, Germany was again one of the favorites to win the Cup. In the days leading up to the 2018 tournament,Özil drew heavy criticism for appearing in a photograph with Turkish President Recep Tayyip Erdogan. Many accused Ozil of a lack of loyalty to Germany and some cited the ensuing national debate as one reason that Germany failed to advance out of the first round. Following this backlash,Özil withdrew from the team.
In explaining his decision and the position that he felt he occupied in Germany, Özil stressed that he has both a Turkish and a German heart and he denounced the discrimination and racism in Germany that would not accept this duality. Many Turkish Germans agreed and started to share their negative experiences on Twitter via "#MeTwo," whereby "Two" referred to the dual identity of people with an immigrant background. To make a choice between the one or the other identity would be impossible, it was argued, explained metaphorically as "never ask a zebra whether it is white or black" and "never ask someone to choose between a mother and a father." Özil's experience demonstrates that multiple identities are an important and sometimes contested matter. Like all identities, they are not only a matter of who people are to themselves, but also who they are to others: in the words of Jenkins (1997, p. 3), identity is the "best device I know for bringing together 'public issues' and 'private troubles'." Immigrants' multiple identities involve how immigrants themselves think and feel about their ethnic and national group memberships, as well as the religious, local, racial, and supranational groups to which they belong; how other members of these groups (e.g., coethnics, the majority population in the host country) view immigrants and act toward them; and the balance or tension between these views (Verkuyten, 2018).
According to data collected by the United Nations (2017), approximately 258 million people are migrants, living in countries other than the ones they were born in.¹ The leading receiving country is, by a large margin, the United States, followed by Saudi Arabia, Germany, and the Russian Federation. If the children of immigrants are considered as well, then as much as a quarter of some countries' populations can be considered to have immigrant heritage. These numbers alone point to the importance of understanding how social identities and immigration are a part of contemporary life.

¹ Within the field of migration studies, distinctions can be made between various forms of migration and categories of immigrants (e.g., short- and long-term migration, back-and-forth migration, return migration, chain migration, cross-border workers, first- and later-generation immigrants, and unauthorized migrants, asylum seekers, and refugees). Like the public, psychologists have not always understood and appreciated these distinctions. Despite their importance, we do not have the space to review them in the present article. Rather, we use the terms "immigrants" and "immigrant-heritage" (a recent history of immigration, or being the child of immigrants) in a generic sense.
In this introductory article, we review the literature on multiple identities among immigrants. We first briefly consider the conceptualization of identity and the importance of studying immigrants' multiple identities. Next, we review some of the key questions that scholars have asked about multiple identities in the psychological literature on immigrant adaptation, pointing to some of the unanswered questions in the field. We then discuss the importance of group dynamics in influencing how strongly immigrants identify with multiple groups and the (in)compatibility between them. Finally, we consider the psychological and sociocultural outcomes of immigrants' multiple identities and identify directions for future research.
Conceptualizing Identity in the Context of Immigration
The academic literature on identity is so vast and includes so many different conceptualizations that almost anything that has to do with what people think, feel, and do has been claimed to be a question of identity (see Wetherell & Mohanty, 2010). Psychological concepts such as attitudes, beliefs, worldviews, self-concept, and personality are increasingly replaced by the buzzword "identity." This combination of conceptual expansion, on the one hand, and theoretical underspecification, on the other, is endemic to the social sciences (Haslam, 2016). It can lead to so much confusion that terms lose all meaning (Verkuyten, 2018). Some scholars have gone so far as to argue that identity as a concept should be abandoned altogether (Brubaker & Cooper, 2000). Although we disagree with this recommendation, we do believe greater specification is needed.
In the context of immigration it is useful to at least make a distinction between processes of group identification that are specified by the social identity perspective, and the gradual, over-time processes of internalizing a cultural system of meaning that are conceptualized in (the development of) cultural identity (e.g., Verkuyten, 2016; Wiley & Deaux, 2010). Identifying with two groups or communities does not necessarily mean that one has a bicultural self, and having a (situational) sense of belonging and commitment to a particular ethnic and national community is not the same as developing an inner sense of self that results from a gradual process of acculturation and enculturation.
Drawing on the social identity perspective that includes Social Identity Theory (Tajfel & Turner, 1979) and Self-Categorization Theory (Turner, Hogg, Oakes, Reicher, & Wetherell, 1987), multiple identities can be conceptualized as social (or collective or group) identities. Social identities tell us something about how people as group members position themselves in their social environment (and are positioned by others), as well as how they derive meaning and value from these positions (Tajfel & Turner, 1979). The social identity perspective does not answer the question "who am I, and where do I belong" in terms of internalized, individual meanings but rather in terms of characteristics and social experiences that we share with other cultural group members. The focus is on identification of the self with a group (Thoits & Virshup, 1997), whereby the self extends beyond the individual person to the group. From a social identity perspective, the question of multiple identities is less about an integrated internal structure or establishing a sense of coherence. The focus is more on how, in particular contexts, specific social identities become relevant, overlap and relate to each other. Different identities can involve contrasting meanings, competing demands and different loyalties and allegiances to others.
The literature on cultural identity and its development tends to conceptualize identity in terms of an inner structure and focuses on the ways in which one's cultural group membership come to be represented as an integral part of a (developing) sense of self. The "who am I" question is answered in terms of internalized, individual meanings that develop progressively during a process of enculturation (Thoits & Virshup, 1997). For example, the research on Bicultural Identity Integration (BII; Benet-Martínez & Haritatos, 2005) examines the extent to which different internalized systems of cultural meaning are psychologically compatible or oppositional. And research on identity development examines how people who have internalized more than one culture incorporate them into their individual self-concepts in order to achieve a more or less coherent self (e.g., Syed & McLean, 2016).
Immigrants and Multiple Social Identities
Everyone is a member of a variety of categories and groups and therefore everyone has multiple social identities. Often, these identities are rather independent of each other because they refer to quite different kinds of categories, domains of life, or levels of abstraction. However, they can also intersect, combine, or conflict, psychologically and socially.
For the immigrant, questions about ethnic and national identity (as well as religious, local, and supranational group belonging) are almost inevitable and, in many cases, similar questions are raised for their descendants as well. Given the large and growing percentage of immigrants in the world today, the study of multiple identities is clearly important, both for understanding specifically how immigrants adapt to their new circumstances as well as to learn more about the general processes of identity development and change. How do immigrants position themselves with regard to their ethnic, national and religious groups and what is the relationship between these group memberships? These questions of multiple identity are at the heart of both societal and scholarly debates about how the sizable and growing immigrant-origin population can become integrated with their new society.
Living outside their countries of birth-or those of their parents or grandparents-makes the possibility of multiple identity salient to immigrants and their descendants. Given that a plurality of immigrants prefer to maintain some attachment to their country of origin at the same time that they acquire a connection to their country of residence (Berry, Phinney, Sam, & Vedder, 2006), understanding immigrants' identities requires understanding multiple identities.
The identity patterns of immigrants are also important for understanding the political attitudes and behaviors of members of the host country, for whom demographic changes related to immigration have challenged apparently stable notions of ethnic, national (or regional), and religious identities (Richeson & Craig, 2011). Majority group fears about how immigrants might change society and national culture have increased support for immigration restrictions, from narrowing access to building walls (Craig & Richeson, 2018), and they have arguably empowered right-wing political movements (e.g., Lubbers, Gijsberts, & Scheepers, 2002). Further, the implication that immigrants with multiple identities can have divided loyalties has introduced new challenges to long-standing understandings of citizenship and naturalization. The suspicion about divided loyalties has animated national policy debates and underlies bias toward immigrants who have dual citizenship (Jasinskaja-Lahti et al., 2019) or hold dual identifications (Kunst, Thomson, & Dovidio, 2018). Understanding the forms, causes, and consequences of immigrants' multiple identities, therefore, has the potential to inform these debates with evidence.
Studying Multiple Identities
Multiple social identities have been studied from a number of perspectives. Scholars have asked, for example, whether people's representation of their multiple group memberships is simple and exclusive or whether it is more complex and inclusive. They have also examined how strongly people identify with their multiple groups and how compatible the group memberships are with one another. Each of these approaches has different theoretical and empirical implications and raises its own set of questions about the nature of multiple identities. We consider some of these key questions below.
How Are Multiple Identities Cognitively Represented?
One approach focuses on the ways in which multiple identities are cognitively organized and integrated, that is, how people think about the boundaries and relationships between the various groups to which they belong. People can think of the relationship and overlap between their group memberships, for example, their ethnic and national identities, in different ways that can be placed on a continuum from less to more cognitively complex (Roccas & Brewer, 2002).
At the simplest level, some people perceive a strong overlap and interrelation among their social identities. An immigrant to the United States from Mexico, for example, may think of their ingroup as consisting exclusively of other Mexican American immigrants. At a slightly more complex level, people may compartmentalize their identities so that they recognize that they share each with a different group of people. The same immigrant may think of himself or herself as having both Mexican and American ingroups, each consisting of different people. At the highest level of complexity, people may recognize that members of each of their respective groups are both ingroup members along some identity dimensions and outgroup members along other dimensions. Again using the example of the immigrant to the United States from Mexico, she or he may feel a shared belonging with other Americans, but also see them as different because of the nonshared Mexican identity. At the same time, a sense of shared belonging with Mexican citizens may be experienced, while seeing differences because of the different geographical locations. Intersecting identities are considered the least cognitively complex because there is no differentiation between group memberships; compartmentalization is more complex, because it involves differentiation, and merged identities are considered the most complex because they involve both differentiation and integration of multiple identities.
Lower identity complexity signifies that multiple identities are embedded in a single ingroup representation (Roccas & Brewer, 2002), which increases the ingroup versus outgroup distinction that is the cognitive basis of ingroup bias (Crisp & Hewstone, 2006). As a result, immigrants with lower social identity complexity may be less open to outgroups, including the host society. Research among Turkish Muslims in the Netherlands supports this prediction. Turkish Muslims who reported lower social identity complexity (i.e., those who perceived that their ingroup members shared both their ethnic and religious group memberships) showed lower national identification, higher ingroup bias, and lower endorsement of national liberal practices (Verkuyten & Martinovic, 2012). Others have reported similar findings among Turkish-Belgian Muslims (Van Dommelen, Schmid, Hewstone, Gonsalkorale, & Brewer, 2015) and Turkish-Australian Muslims (Brewer, Gonsalkorale, & van Dommelen, 2013).
Thus, dual identities can mean quite different things to members of the same groups, with important implications for intergroup relations. For some, the boundaries of the ingroup are narrowly circumscribed, whereas for others they are more expansive and differentiated. Further, the research findings highlight the importance of moving beyond a focus on ethnic and national identity by including other category memberships, such as religion (or race).
How Strong Are Multiple Identities?
A second approach focuses on the strength of people's multiple identities (Fleischmann & Verkuyten, 2016;Van Dommelen et al., 2015). There are two main strategies for studying immigrants' dual identity in terms of the strength of ethnic identification and host national identification. The first focuses on the mean scores and association between immigrants' levels of ethnic group identification and their sense of belonging to the nation of settlement. This approach is similar to the acculturation literature, in which group identifications are considered to be independent from each other (Berry, 2001;Hutnik, 1991) and thus multiple combinations of high and low attachment to both the ethnic and the national communities are possible. Cross-national research indicates that in most countries immigrants tend to have a weaker identification with the new nation than with their ethnic heritage, and a weaker national identification compared to the majority group (Elkins & Sides, 2007;Fleischmann & Phalet, 2018;Staerklé, Sidanius, Green, & Molina, 2010).
Focusing on the combination of two separate group identifications (German and Turkish; Mexican and American) may not always adequately capture the subjective meaning of dual identity (Turkish German, Mexican American). It is difficult to know whether people with both a strong ethnic and national identification actually experience this pattern as a dual identity. The latter might have different psychological meanings and different social consequences from the former (Hopkins, 2011). Therefore, and as a second approach, one can also focus directly on the strength of dual identification (feeling "Turkish German," or "Mexican American"). However, in the absence of additional information about migrants' cognitive representations of both group memberships and the extent to which the combined identity is recognized as a social group in the context under study, the use of such a direct dual identification measure raises the question of what exactly a high, and also a low, score on such a measure means. For example, a low dual identification score might indicate a lack of identification with both groups or rather a low level of host national identification against the backdrop of a strong ethnic identification. And a high score might indicate a strong identification with both groups or rather a qualified form of a strong ethnic identification to which a sense of host national belonging is added (Fleischmann & Verkuyten, 2016).
Whether the strength of identification with a combined category (e.g., African American, British Muslim) captures something other than the strength of identification with its component identities (African and American; British and Muslim) will depend, among other things, on whether the combined category is perceived by its members and society as a separate category. In some contexts, the combination of two distinct social identities is recognized as coming together in a specific category for which unique group labels are created, such as Chicano (Mexican Americans). However, in many immigration contexts, the political and historical context and the prevailing social representations of national and ethnic identities do not leave much room for "being both": combinations such as "Turkish French" or "Vietnamese German" are not common, and terms that express these combined identities are rare or perceived as awkward. In such contexts, it is not clear what is captured by a measure such as "I feel strongly Vietnamese German." In conclusion, the meaning of the strength of identification with a dual identity is likely to depend on immigrants' cognitive representations of their ethnic and national group memberships, and on the extent to which the societal context recognizes dual identities.
Are Dual Identities Compatible with One Another?
Immigrants not only face the question of how strongly to maintain a sense of ethnic group belonging, but also the challenge of developing a sense of belonging to a new society. These dual objectives can involve the difficult task of reconciling group belongings and commitments, as well as combining contrasting moral world views and normative expectations (e.g., Benet-Martínez & Haritatos, 2005). Neurological research has demonstrated that stress and psychological conflict often result when immigrants experience their ethnic and host national identities as being incompatible or in opposition to each other (Hirsh & Kang, 2016).
Across different countries with different immigration histories, the levels of identification and the association between national and ethnic identification can vary. In a cross-national study of immigrant youth (Berry et al., 2006), for example, a positive association between ethnic and national identification was found in settler societies such as Australia, Canada, New Zealand, and the United States, whereas there was a negative association in traditionally nonsettler countries, including Germany, the Netherlands, France, Norway, Sweden, and Portugal. The evidence that a pattern of negative rather than positive associations is more common in Western Europe than, for example, in North America, suggests immigrants' ethnic group identification is not easily reconciled with a strong sense of belonging to European nations. Furthermore, research on national dis-identification indicates that a substantial number of Turkish people in the Netherlands explicitly distance themselves from, and do not want to be identified with, their host nation (Verkuyten & Yildiz, 2007). The level of dis-identification with the Netherlands was found to be higher when Turkish immigrants identified more strongly with their ethnic and religious (Islam) minority community (Maliepaard & Verkuyten, 2018).
How Do the Meanings of Multiple Identities Vary?
The majority of research on multiple social identities has focused on easily assessed measures of the level of identification with ethnic and national groups, with relatively little work devoted to exploring other possible dimensions of evaluation and representation, or to the meanings that might be associated with the various groups. Yet it is precisely these latter questions, dealing with functions, values, and meanings, that need to be understood if the research on immigrant identity is to contribute to policy and practice.
Consider the different functions that an identity might fulfill for the immigrant. Immigrants may feel, as one example, emotionally involved in their ethnic community while having a more instrumental view toward their new nation and their belonging to the new society. Some aspects of a strong ethnic identity are more easily reconciled with belonging to the new nation than are others (Snauwaert, Soenens, Vanbeselaere, & Boen, 2003; Verkuyten & Martinovic, 2012). To the extent that these two identities are enacted at different times and in different spaces, it may be relatively easy to endorse each. Alternatively, one might join or construct social networks that include members of both the coethnic and conational group, thus perhaps allowing the two functions to be fulfilled within the same social space (see Love & Levy, 2019, and Repke & Benet-Martínez, 2019).
When it comes to loyalties, values, and worldviews, a sense of compatibility between multiple group identifications is often much more difficult to achieve. For example, to the extent that immigrants adhere to morally traditional and patriarchal beliefs, these beliefs will be less compatible with a sense of belonging to Western countries that emphasize liberal values, including gender equality and sexual minority rights (Eskelinen & Verkuyten, 2018; Maliepaard & Verkuyten, 2018). Similarly, identity incompatibility is more likely if the behavioral implications of the two group identities are contradictory (Hirsh & Kang, 2016), which could be the case, for example, for immigrant Muslim youth at an age when alcohol use becomes common among their peers, and youngsters have to make a choice between following the behavioral norms of their religious or their conational ingroup. At the same time, qualitative exploration of how Muslims in England define their identities shows that in some cases immigrants can create their own concept of what it means to be a Muslim in Britain, one that allows their religious values and behaviors to be incorporated into the meaning of being British (Hopkins, 2011). And youth can creatively interpret and reinvent cultural meanings in developing novel combined identities which allow them to negotiate their sense of societal belonging (Ketner, Buitelaar, & Bosma, 2004; Wiltgren, 2017). These examples suggest that national identification can encompass and emphasize different aspects (political, historical, geographical) and different dimensions, and people within the country can have quite different understandings of what it means to be a national (e.g., ethnic, civic, cultural; cf. Reijerse, Van Acker, Vanbeselaere, Phalet, & Duriez, 2013). Immigrants, for example, can identify with the host country and its institutions (Germany, the Netherlands) or its core narrative ("American dream"), but not with the majority population (Germans, Dutch, White Americans; Van der Welle, 2011). Unfortunately, very little research has examined what the host nation means to immigrants and how they reason about national belonging. In one study among young Moroccans in the Netherlands (Olmo, 2011), five main reasons for feeling Dutch emerged: being born in the country (soil principle), being raised in the Netherlands (cultural principle), having one's future in the Netherlands (future principle), contributing to the country (participation principle), and feeling emotionally attached to the Netherlands (emotion principle). Thus, second-generation immigrants can self-identify as host nationals because they were born and raised in the country in which they imagine their future, without identifying with the majority group or having a sense of belonging, commitment, and loyalty to that group.
Similarly, little research among immigrants has considered different dimensions of identification with the novel nation and the immigrant-heritage group. Most approaches to social identities distinguish between different identity components or dimensions, such as private and public regard, cognitive centrality, commitment, importance, and values and beliefs (see Ashmore, Deaux, & McLaughlin-Volpe, 2004;Roccas, Sagiv, Schwartz, Halevy, & Eidelson, 2008;Umaña-Taylor et al., 2014). A distinction between different dimensions allows for a more detailed understanding of immigrants' multiple identity and its different meanings, and for examining how variation along these dimensions is related to functioning and behavior. For example, the distinction between dimensions makes it possible to conceptualize group identification in terms of profiles (Deaux, 2006;Roccas et al., 2008;Wiley & Deaux, 2010) and to differentiate between immigrants who have a more homogeneous or heterogeneous pattern of ethnic group identification.
Immigrants with a homogeneous identification profile express similarly high levels of ethnic identification across different dimensions, making one identification score sufficient for capturing the extent of their ethnic identification. In that case, the different aspects are experienced as going closely together, such that high ethnic group importance also means adherence to ethnocultural beliefs, a sense of attachment to the ethnic community, engagement in ethnic practices, and the behavioral enactment of one's ethnic identity. It is also possible, however, for immigrants to have more heterogeneous identification profiles, whereby they endorse some dimensions strongly while scoring relatively low on others. For instance, ethnic self-identification can be strong despite an acknowledged lack of cultural knowledge and practice. In the United Kingdom, Hutnik (1991) found that ethnic ingroup identification does not necessarily coincide with ethnic cultural preferences and behavior. An individual can identify herself predominantly in ethnic terms, even though she has made important cultural adaptations in order to live effectively in the host society. Ethnic self-identification may be relatively independent of styles of cultural adaptation and behavioral enactment. And whereas a high level of religious group identification is ubiquitous in adult and adolescent Muslim samples in Europe, there is great variation when it comes to specific religious practices (e.g., Phalet, Fleischmann, & Stojcic, 2012).
Heterogeneous profiles indicate that immigrants differ not only in the extent of their ethnic group identification but also in the meaning of their group belonging. Immigrants with similar overall levels of identification can have different profiles, which makes it difficult to meaningfully compare their levels of group identification. Heterogeneous profiles may also lead to intragroup disagreements and debates. Two immigrant women might have a similar sense of belonging to their ethnic community, but whereas for one this would imply particular values and ways of behaving (e.g., speaking the language of origin), that might not be true for the other. Such differences can lead to strong debates about what it means to be a "true" ethnic or religious group member and about the acceptability of different ways to achieve that status (Hoekstra & Verkuyten, 2015).
Group Dynamics Associated with Multiple Identities
Multiple identities are prevalent among immigrants, but the specific intergroup and intragroup processes in which they participate can make the various social identities more or less relevant and compatible. In looking more closely at these group dynamics, we are able to consider not only the "private troubles" but also the "public issues" involved in multiple identity issues (Jenkins, 1997).
The Receiving Society
Research on dual identity explicitly or implicitly concerns the impact of the broader society on the individual. The context of reception (Portes & Rumbaut, 2006) within the community at large may be more or less supportive of an immigrant's attempt to incorporate a new national identity. To the extent that majority members see ethnicity as an essentialist element of national identity, for example, the immigrant who deviates from that standard may be considered to be outside the acceptable limits (Pehrson, Brown, & Zagefka, 2009; Wright, Citrin, & Wand, 2012). The definition of what it means to be a "true national" will thus have repercussions for immigrants' ability to reconcile national belonging with an ethnic identity. The classic distinction between national identity content in terms of ethnic versus civic definitions (cf. Brubaker, 1992) has been complemented with a cultural definition, such that sharing core cultural traits, like the national language and also a Christian heritage, is regarded by some majority members as a necessary condition to claim national belonging (Reijerse et al., 2013). The refusal by some European countries, at the height of the European "refugee crisis," to admit refugees due to their Islamic religion is a recent example of how exclusionary definitions of national identity content engender identity incompatibility by making national group boundaries impermeable for immigrants in general, and Muslim immigrants in particular. Efforts by the Trump administration to ban immigration to the United States from primarily Muslim countries reflect a similar stance.
Numerous studies have documented the negative impact of discrimination experiences on dual identity. Among Chinese Americans in the United States, for example, perceived discrimination was associated with greater conflict between identity groups (Benet-Martínez & Haritatos, 2005; see also Miller, Kim, & Benet-Martínez, 2011). In a study in Canada, higher perceived discrimination predicted lower dual identity integration through greater stress, whereas lower discrimination was related to lower stress and thereby to greater identity integration (Yampolski & Amiot, 2016). Further, results of a four-wave longitudinal study among newly arrived international students showed that greater discrimination led to the predominance of one identity over others (Amiot, Doucerain, Zhou, & Ryder, 2018). Discrimination and rejection from members of the host country can make it harder for immigrants to acquire a new national identity (e.g., Bobowik, Martinovic, Basabe, Barsties, & Wachter, 2017; Wiley, Lawrence, Figueroa, & Percontino, 2013). In some cases, discrimination and rejection can also strengthen immigrants' ethnic identities or present them in a new light (Bobowik et al., 2017; Verkuyten & Yildiz, 2007). Across several European societies, Muslims who perceived more instances of discrimination or saw more anti-Islamic attitudes in their receiving country identified more strongly with their religious community and displayed lower levels of identification with, or even dis-identification from, the nation of residence (Fleischmann & Phalet, 2016; Kunst, Tajamal, Sam, & Ulleberg, 2012; Verkuyten & Yildiz, 2007). In contrast, national identification of (Muslim) immigrants tends to be stronger in European societies with more multicultural policies (Igarashi, 2019).
The Ethnic Immigrant Community
A focus on factors related to the broader society often ignores important processes within immigrant communities that influence dual identity. However, for minority group members, as pointed out by Tajfel (1978, p. 327), "identity is simultaneously determined by the socially prevailing views of the majority and by the psychological effects of their own culture and social organization" and "a person's idea about himself or herself is at least as much (and probably much more) dependent upon continuous and daily interaction with individuals from the same social group" (p. 328). Research among immigrant-heritage groups has clearly shown that they often prefer to compare themselves to coethnics rather than to the majority group (e.g., Abbey, 2002; Leach & Smith, 2006; Zagefka & Brown, 2005). Differences and similarities within one's own cultural or religious minority community get much more attention in daily life and are much discussed. People make comparisons between subgroups within their own ethnic minority community: for example, they may compare themselves with recently arrived coethnic immigrants, looking down on them or feeling ashamed of them because the behavior of these newcomers might reflect badly upon them, and as a result try to distance themselves from them (Kumar, Seay, & Karabenick, 2015). Similarly, immigrants may make within-group distinctions between those with darker versus lighter skin color, between those from urban versus rural regions in the country of origin, or between orthodox and more liberal Muslim immigrants. These examples indicate that ethnic group membership involves crucial issues of ingroup acceptance and support as well as ingroup obligations and pressures (see Cárdenas, 2019). The immigrant community itself can provide support for maintaining one's ethnic identity and/or discourage the too rapid adoption of new national identities (Badea, Jetten, Iyer, & Er-rafiy, 2011; Wiley, 2013).
In addition, many identity issues for immigrants pertain to inter-minority and transnational comparisons. When Latinx immigrants (e.g., Colombians) to Spain experience discrimination, for example, they can favorably compare their ingroup to a relatively lower-status group, such as African immigrants (Madi, Bobowik, & Verkuyten, 2019;Sevillano, Basabe, Bobowik, & Aierdi, 2014). Furthermore, political events in the country of origin and the emergence of transnational and diaspora communities may influence how immigrants define and locate themselves in the host society (Kurien, 2018). In implicitly using the nation-state as the unit of analysis, we may fail to consider the wider transnational field of concerns and actions of many immigrants. For example, immigrants' acculturation orientations are not only determined by the perceived rejection in the country of settlement, but also by the perceived rejection from family and friends in the country of origin (Badea et al., 2011;Perkins, Wiley, & Deaux, 2014;Wiley, 2013).
Implications of Multiple Identities
Immigrants' multiple identities can be considered in relation to their psychological adaptation and their sociocultural adaptation (Ward, Bochner, & Furnham, 2001). The former refers to one's well-being and the processes of coping with the stress of migration and intercultural transition. The latter relates to the cultural learning process and effective functioning in the new society. Furthermore, dual identity can be instrumental in acting collectively to address societal disadvantages and inequalities.
Psychological Adaptation
There are good reasons to believe that dual identities have psychological advantages, based on acculturation theory and the social identity perspective. In general, dual identity has psychological advantages for immigrants and ethnic minorities over identification with just one component identity (see Berry et al., 2006; Dimitrova et al., 2017; Nguyen & Benet-Martínez, 2013). Maintaining a sense of ethnic belonging has a positive effect on the immigrant's well-being; in addition, the acquisition of meaningful new group memberships and the consequent increase in the number of group identifications can improve psychological well-being (Greenaway et al., 2015). In a study among six recent immigrant groups in the Netherlands, dual identifiers felt more at home and were happier than immigrants who identified primarily with a single identity (Fleischmann & Verkuyten, 2016). In another Dutch study of adults with either a Turkish or Moroccan background, dual identifiers showed higher levels of psychological well-being (life satisfaction), and lower levels of negative states (depression, emotional loneliness, and social loneliness) across a 3-year period than did immigrants with a single group identification (Zhang, Verkuyten, & Weesie, 2018).
Yet apart from the difficult question of causality and the likelihood of mutual influences between group identifications and well-being, we should be careful not to present too rosy a picture: trying to develop and maintain a dual identity can be challenging and stressful. It can involve the difficult task of reconciling group belongings and loyalties, combining contrasting cultural world-views and normative expectations, and maintaining multiple social networks, all of which can induce stress and create psychological conflict (Hirsh & Kang, 2016; Rudmin, 2003). A person can experience their different identities as being incompatible or in opposition to each other (Benet-Martínez & Haritatos, 2005). Unfavorable consequences can include a lowered sense of belonging, divided loyalties, the perception of contrasting values, contradictory behavioral expectations that prevent people from enacting their two identities, as well as conflicts between and within diverse social networks. The experience of incompatibility between previously formed and new social identities is probably common and unavoidable for many immigrants, and the result can be one of "feeling neither" rather than a feeling of "being both." To be both can be psychologically challenging and can be socially challenging as well. Minority members can struggle with the question of what they are, whom they belong to, and how they should feel and act. The questioning or denial of one's dual identity by others can make people feel further restricted in their identity choices, resulting in negative affect and lower well-being (Albuja, Sanchez, & Gaither, 2019; Sanchez, 2010; Wang, Minervino, & Cheryan, 2013). Both coethnics and majority members might refuse to validate a dual identity claim. Coethnics might think the immigrant is disloyal in wanting to be part of the majority, while majority members might regard the same immigrant as too ethnic to be incorporated in the national group. The resulting conflicts almost inevitably have some negative implications for well-being (see Settles & Buchanan, 2014).
Sociocultural Adaptation
Sociocultural adaptation requires cultural learning to function and achieve one's goals in the new society. It covers such facets as general adjustment, social interaction adjustment, and work adjustment and is acquired via social learning (from host society members) and learning generalization (Wilson, Ward, & Fisher, 2013). The beneficial aspects of a dual identity for sociocultural adaptation have been documented with increasing frequency (Nguyen & Benet-Martínez, 2013), including evidence for greater cognitive flexibility and the ability to adjust to and function well within two different cultural contexts (Kang & Bodenhausen, 2015). For example, the acculturation complexity model (Tadmor & Tetlock, 2006) proposes that the repeated attempts of dual identifiers to resolve cultural discrepancies gradually improves their cognitive ability to acknowledge, accept and integrate competing perspectives on the same issue (see also Benet-Martínez, Leu, Lee, & Morris, 2002). These cognitive advantages are not limited to culturally specific tasks, but can also extend to thinking about noncultural issues (Tadmor, Tetlock, & Peng, 2009) and to greater creativity (Goclowska & Crisp, 2014). Furthermore, identifying with two groups and living with and within (rather than between) two cultures is thought to lead to a reflexive attitude that enables a critical and innovative view of groups and cultures. A dual position can result in a broader horizon, a sharper view of social relationships, and the ability to act as an intermediary (broker) in attempts to bridge the gap between different ethnic and cultural groups (for example, Kang & Bodenhausen, 2015;Levy, Saguy, van Zomeren, & Halperin, 2017;Love & Levy, 2019).
Collective Action
According to the social identity perspective, social identities do not only reflect the social context in which people find themselves but are also instrumental in trying to change that context. Social identities provide a shared sense of "us" which gives unity and direction, and which is therefore an important collaborative social force for achieving identity-related goals, as shown in the civil rights struggle and other struggles for ethnic-racial equality and justice around the world. For example, among Muslim minority youth in Europe, identification with Islam and religious youth organizations forms the basis for collective action and protest against inequality and exclusion (Cesari, 2003).
Researchers have also examined the association between dual identity and political outcomes among immigrants and their descendants. In a series of studies, Simon and colleagues (e.g., Simon & Grabow, 2010; Simon & Ruhs, 2008) found that dual identification, measured directly as a blending of ethnic and national identities, increased normative forms of politicization among Turkish and Russian immigrants to Germany, above and beyond the effects of each constituent social identity. However, this relationship seems to depend on the particular societal conditions and the psychological nature of the dual identity. Simon and colleagues (e.g., Simon & Grabow, 2010; Simon & Ruhs, 2008) ground the association in their model of Politicized Collective Identity (PCI; Simon & Klandermans, 2001). They argue that ethnic identification gives immigrants reason to challenge wrongs against their ethnic group and that national identification entitles immigrants, as part of a national community, to have those grievances addressed. However, those same ideologies and policies (i.e., assimilationist beliefs, anti-Latino policies) that would count as grievances against immigrants' ethnic groups have also been found to diminish dual identifiers' inclination to support collective action for addressing those grievances (Verkuyten, 2017; Wiley, Figueroa, & Lauricella, 2014). Immigrants with dual identities may perceive the boundaries between their ethnic and national groups as relatively permeable. According to Social Identity Theory (e.g., Tajfel & Turner, 1979), they may therefore be willing to express their twoness publicly, for example through collective action, when it is relatively safe to do so. Under conditions that are more threatening to their ethnic group, however, those who perceive intergroup boundaries as relatively permeable may be more likely to withdraw from the public arena than to redouble their efforts at protest.
Related to the nature of the dual identity, research in Germany (Simon, Reichert, & Grabow, 2013) found that Russian and Turkish migrants with dual identities who considered their dual group memberships to be in conflict expressed more sympathy for "radical" (i.e., violent, illicit, or destructive) forms of political action. Furthermore, immigrants' dual identities might influence how they participate politically on other grounds. Research in Germany and the Netherlands found that those with strong ethnonational dual identities were less likely to mobilize on religious grounds, because they identified less strongly with their Muslim group (Martinovic & Verkuyten, 2014). Research in the United States demonstrates that Mexican Americans who identify with both cultural groups adopt a more liberal ideology, in line with the Democratic party's current support of multiculturalism and the reduction of social inequality (Naumann, Benet-Martínez, & Espinoza, 2017).
This Special Issue
The important implications of dual identities along with the conceptual diversity and confusion that is apparent in theorizing and empirical research on identity multiplicity in the context of immigration, inspired us to organize a small group meeting devoted to these questions. The 2017 Joint Meeting of the European Association of Social Psychology (EASP) and the Society for the Psychological Study of Social Issues (SPSSI) was held in September and brought together researchers from Europe and the United States. The meeting was cosponsored by the International Society for Political Psychology (ISPP) and Utrecht University.
The current issue of the Journal of Social Issues showcases some of the empirical and conceptual contributions that were presented during the meeting. Across ten contributions, we aim to advance the understanding of identity multiplicity in the context of immigration by offering new conceptual approaches, reviews of existing research lines, as well as new empirical findings regarding the ways in which immigrants or persons of immigrant origin relate to their multiple social identities, in relation to their social ties and experiences in the receiving society, and with regard to important outcomes such as well-being, academic performance, and politicization. Some of the contributions are the result of new collaborations that were set up during the meeting, and contributors include both junior and senior researchers from both sides of the Atlantic.
The contributions are organized around two key themes. The first addresses the relation between social networks or, more generally, contact with ingroup and outgroup members, and identity multiplicity. This section contains two contributions offering new conceptual approaches (Love & Levy, 2019;Repke & Benet-Martínez, 2019), one review of research using social network analysis in relation to migrants' identification (Leszczensky, Jugert, & Pink, 2019), and one empirical article about the mediating role of identity-related cognitions in the association between contact and acculturation (Sixtus, Wesche, & Kerschreiter, 2019). The second section focuses on the implications of identity multiplicity for a broad range of outcomes. The first two contributions are concerned with adjustment or well-being and address the role of multiple identification in terms of making individuals vulnerable to identity threats, particularly in the form of identity questioning (Albuja, Gaither, & Sanchez, 2019), but also for coping with perceived discrimination at the individual and group level (Balkaya, Cheah, & Tahseen, 2019). The third contribution focuses on academic achievement among immigrant-origin pupils in secondary education and sheds light on the conditions under which embracing an integrated identity is beneficial for performance, and when this is costly (Baysu & Phalet, 2019). The fourth contribution examines threat to dual identity that originates from the coreligious ingroup rather than from the receiving society, and tests how this threat differently affects the support for group rights among two branches of Muslims with a Turkish migration background (Cárdenas, 2019). The issue is concluded with a contribution that discusses the implications of the contributions for research and policy (Wiley, Fleischmann, Deaux, & Verkuyten, 2019).
The empirical contributions cover a broad range of immigrant-receiving contexts and immigrant groups. Two of the empirical contributions draw on U.S.-based samples of immigrants or their descendants (Albuja et al., 2019; Balkaya et al., 2019). Both are diverse in terms of their ethnic or racial composition, but the second focuses more narrowly on Muslim Americans as an important religious minority group in public debates about the compatibility of religious minorities with Western national identities. The social network research that is reviewed by Leszczensky and colleagues (2019) has been conducted in a range of countries and is based on complete networks of school classes or entire schools, and thus reflects the high diversity in terms of ethnicity, religion, and immigrant generation that is present in today's classrooms. In contrast, the samples in three other studies target specific groups. Sixtus and colleagues (2019) contrast Hungarian Christians with Palestinian Muslims in Germany, thus comparing groups with different levels of cultural distance toward the German receiving society as well as different legal positions. For their analysis of academic performance, Baysu and Phalet (2019) sampled students of Turkish and Moroccan origin as the most deprived ethnic minorities in the Belgian context. Finally, Cárdenas (2019) compares Sunni and Alevi Muslims of Turkish descent who are living in Germany and the Netherlands. Together, these studies provide insights into identity multiplicity among a broad range of immigrant groups, and also reflect the focus on Muslim minorities in Western receiving societies that is characteristic of the research literature and societal debates on both sides of the Atlantic.
"year": 2019,
"sha1": "44db0f44bc2585eb06457855974b160b77e5b48a",
"oa_license": "CCBY",
"oa_url": "https://spssi.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/josi.12324",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c6e6736f730513f38a644419c555a144688efa52",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Molecular Characteristics in MRI-Classified Group 1 Glioblastoma Multiforme
Glioblastoma multiforme (GBM) is a clinically and pathologically heterogeneous brain tumor. Previous transcriptional profiling studies have revealed biologically relevant GBM subtypes associated with specific mutations and dysregulated pathways. Here, we applied a modified proteomic approach to uncover the abnormal protein expression profile of MRI-classified group 1 GBM (GBM1), which has a spatial relationship with one of the adult neural stem cell niches, the subventricular zone (SVZ). Most importantly, we identified molecular characteristics of this type of GBM, including up-regulation of metabolic enzymes, ribosomal proteins, and heat shock proteins. As GBM1 often recurs at great distances from the initial lesion, the rewiring of metabolism and of ribosomal biogenesis may facilitate cancer cell growth and survival during tumor progression. The intimate contact between GBM1 and the SVZ raises the possibility that tumor cells in GBM1 may be most closely related to SVZ cells. In support of this notion, we found that markers representing SVZ cells are highly expressed in GBM1. The findings that emerged from our study provide a specific protein expression profile for GBM1 and offer better prediction and therapeutic implications for this multifocal GBM.
INTRODUCTION
Glioblastoma multiforme (GBM), a devastating disease with limited therapeutic options, is a highly aggressive brain cancer characterized by uncontrolled proliferation, resistance to cell death, robust angiogenesis, and vascular edema. Integrated genomic analysis has identified mutations in distinct types of GBM, including (1) TP53 and isocitrate dehydrogenase 1 (IDH1) in proneural tumors; (2) NF1 in the mesenchymal subgroup; (3) histone 3.3 in pediatric GBM; and (4) EGFR amplification in classical GBM tumors (1). Microarray expression profiling has delineated genes associated with tumor grade and progression, as well as processes resembling those that regulate neurogenesis (2). Thus, the stem/progenitor cells existing in the subventricular zone (SVZ) of the adult neurogenic niche are suspected to give rise to GBM. Indeed, the heterogeneous nature of GBM manifests in mixed cell types within the tumor, including a subpopulation known as glioma stem cells (GSCs) (3, 4). Additionally, the gene expression signature of GSCs resembles those of embryonic stem cells (ESCs) and neural stem cells (NSCs), suggesting that GSCs share features with non-neoplastic stem cells. The similarity among GSCs, ESCs, and NSCs provides insight into their common stem-like behavior in terms of self-renewal, phenotype, and relevant signaling pathways (3, 5-9). Controversially, a recent report suggested that this type of brain tumor could also develop through reprogramming of mature cells into progenitor-like cells by oncogenic factors (10). Independent of these hypotheses, a previous MRI study of the spatial relationship of the contrast-enhancing lesion (CEL) with the SVZ and cortex revealed that group 1 GBM (GBM1) contacts the SVZ intimately and recurs at great distances from the initial lesion (11). Since the SVZ harbors cells with great proliferative potential and the microenvironment within the SVZ is permissive to growth and proliferation, this neurogenic niche is suspected to be a vulnerable site for the origin of subtypes of GBM.
Mutation and gene expression profiling hold promise for GBM classification, but such profiling is not performed routinely in the clinical setting. Usually, patients with GBM are diagnosed and classified based on MRI features (11). However, the molecular characteristics underlying MRI-classified GBM, such as SVZ-associated GBM1, remain to be determined. In this study, we focus on the identification of aberrant protein expression in GBM1. As GBM1 is known to have recurrent tumors at locations distant from the initial lesion, we found that Annexin A2, a tumor-associated protease which plays a critical role in tumor invasion, is abundant in GBM1. Importantly, several highly expressed proteins in GBM1 are linked to metabolism and ribosomal biogenesis, indicating that metabolic components are activated to support cancer cell growth and survival. Additionally, we found that the c-Myc oncoprotein is highly expressed in GBM1. c-Myc is known to regulate cell growth and proliferation through stimulation of ribosomal biogenesis (12-16), and perhaps c-Myc overexpression in GBM1 enhances rRNA synthesis to drive tumor cell growth. Taken together, as this malignancy progresses, the growing tumor with increased nutrient demands must use metabolic reprogramming to maintain growth and proliferation. Our findings suggest the potential to exploit corrections to cancer metabolism for GBM1 therapy.
RESULTS
To uncover the molecular characteristics of MRI-classified GBM1, we undertook a proteomic approach to detect aberrant protein expression specifically in GBM1. GBM1 specimens were provided by the UCSF Neurosurgery department/brain tumor tissue core and CHTN/NCI (tumor and control regions are depicted in Figure 1A). A modified version of our Microwave and Magnetic (M2) proteomics method was employed for these studies to semiquantitatively compare relative protein abundance in specimens from GBM1 vs. normal brain regions. Briefly, proteins that were highly expressed in GBM1 compared to the normal brain region were inferred from the confidence (probability-based Mascot score) with which top-ranked amino acid sequences could be assigned to MS/MS spectra of tryptic peptides cleaved from top-ranked proteins. In a parallel proteomic analysis, an alternative method using Arg-C digestion and Orbitrap Elite mass spectrometry was applied to independent sets of specimens. Proteins identified as aberrantly abundant in GBM1 on the top-ranked list are summarized in Table 1.
Compelling evidence has shown that human GBM is a heterogeneous tumor composed of tumor cells and a portion of cancer stem cells (also called tumor-initiating cells), which share common features with normal NSCs. These adult NSCs with astrocyte-like characteristics in the human SVZ display the markers GFAP and vimentin (17). In support of this notion, through proteomic screening we found that both GFAP and vimentin are highly expressed in GBM1 compared to the correlated brain region from normal human specimens (Figure 1B; Table 1). In addition, by western blot of independent specimens, we further validated that GFAP and the neuroblast marker doublecortin (DCX) are highly expressed in GBM1 (Figure 1C). Consistent with a previous study showing that DCX-positive cells are abundant at birth but decline rapidly within the first 2 years of human life (18), we also found that the DCX level is very low in the control region (Figure 1C). However, DCX was elevated in GBM1 specimens (Figure 1C), implicating a potential signature of GBM1. Although our current result is not direct evidence showing that GBM1 arose from the SVZ, notably, GBM1 tumors harbor undifferentiated SVZ cells. Importantly, proteins with known roles in energy metabolism and ribosome biogenesis were identified as highly expressed in GBM1 compared to correlated normal brain regions (Table 1; Figure 2). As a growing tumor must meet energetic and biosynthetic demands to survive environmental fluctuations in nutrient availability, cancer cells dramatically alter their metabolic circuitry (19).
FIGURE 1 | (B) GFAP, listed in Table 1, was highly expressed in GBM1 vs. normal brain region specimens. The annotated MS/MS spectrum illustrates the amino acid sequence assignment of product ions to the top-ranked tryptic peptide, VDFSLAGALNAGFK, which spans amino acid residues 50-63 of GFAP; the insert shows the amino acid sequence coverage of GFAP, with observed tryptic peptides in bold (and peptide 50-63 underlined). (C) Abundant levels of GFAP and doublecortin (DCX) in independent GBM1 specimens by western blot.
Table 1 | Summary of selected highly expressed proteins for GBM1 vs. normal brain region specimens. Representative semi-quantitative data for top-ranked proteins and their top-ranked peptides include: the Trembl protein database accession symbol (prot_acc), protein description (prot_desc), probability-based protein database searching score (prot_score) for GBM/normal, peptide score (pep_score), peptide expectation value (pep_expect), and peptide sequence (pep_seq). Values highlighted in red and blue correspond to b-ion and y-ion fragments, respectively, found in the tandem mass spectra.
Thus, these proteins associated with metabolism and ribogenesis are up-regulated to support enhanced growth and proliferation in order to survive periods of metabolic stress. We also found that two heat shock proteins (HSPs), the 71 kDa HSPA8 and HSP-beta1, were elevated in our proteomic screening (Table 1). HSPA8 is induced by different stress signals to promote cell survival (20), whereas the role of HSP-beta1 in cancer is not clear. In addition, tumor-associated proteases play an important role in tumor migration through degradation of the extracellular matrix (ECM) (21, 22), and we found that Annexin A2, a member of the family of tumor-associated proteases, is highly expressed in GBM1 (Table 1). Previous reports in cell culture systems have demonstrated that knock-down of Annexin A2 inhibits glioma cell invasion, suggesting its potential as a GBM1 therapeutic target (23, 24). Through our semi-quantitative proteomic approach, abnormal accumulation of several proteins involved in ribosomal biogenesis was identified as a signature of GBM1. Taking this into account, previous studies have demonstrated that c-Myc, a basic helix-loop-helix-zipper (bHLHZ) transcription factor, controls cellular growth through regulation of ribosomal biogenesis (14, 25-29). Intriguingly, in a parallel study, we found that c-Myc is expressed in the adult SVZ. The SVZ contains slowly dividing NSCs, known as type B cells, with astrocyte-like morphology. These type B cells give rise to transit-amplifying C cells, which then generate immature neuroblasts (A cells). These neuroblasts coalesce in the rostral migratory stream (RMS) and then generate interneurons in the olfactory bulb (30, 31). In the adult mouse SVZ, the majority of c-Myc expression co-localizes with Mash1 and DCX, which label transit-amplifying C cells and neuroblasts, respectively (Figure 3). Anti-mitotic treatment via the infusion of cytosine-β-d-arabinofuranoside (Ara-C) into the adult brain eliminates these fast-dividing progenitors and neuroblasts in the SVZ but leaves slowly dividing stem cells (B cells) unaffected (32, 33). We applied this treatment to validate the c-Myc expression pattern in the SVZ. Notably, the population of c-Myc-positive cells was substantially diminished after Ara-C treatment (Figures 3F,G). Because the Ara-C experiment cannot be performed in humans or non-human primates, we applied this treatment in the adult mouse brain to reveal that c-Myc is highly expressed in the DCX-positive population within the SVZ, which has intimate contact with the GBM1 tumor.
This intriguing finding in mice, and the fact that c-Myc is involved in the etiology of different types of cancer (34, 35), prompted us to examine whether c-Myc is involved in tumors associated with this germinal niche. To this end, we examined c-Myc abundance in independent specimens from GBM1 and the other groups of MRI-classified GBM. We found elevated c-Myc levels specifically in GBM1 (Figures 4A,B). The Myc protein family comprises c-, N-, and L-Myc (36-40). However, we did not find overexpression of N-Myc or L-Myc in GBM1 (data not shown), suggesting that c-Myc has a distinct role in GBM1 tumorigenesis. As GBM1 tumors contain undifferentiated SVZ cells, including DCX-positive neuroblasts (Figure 1C), we further showed that c-Myc is abundant in the DCX-positive population in a GBM1 specimen (Figure 4C), offering a specific protein expression profile for the putative cancer-initiating cells. Since GBM1 has a multifocal phenotype and c-Myc is preferentially expressed in SVZ cells with migratory potential, overexpression of c-Myc may play a role in facilitating tumor growth and migration specifically in GBM1.
FIGURE 4 | c-Myc level is elevated in type I GBM. Western blot analysis shows elevated levels of c-Myc in independent sets of GBM1 specimens when compared to other types of GBM (groups II, III, and IV) and control tissue specimens. (A,B) Control specimens were from non-cancer donors that were regionally and age matched to the MRI-characterized GBM specimens; γ-tubulin was used as an internal control. (C) The c-Myc level is abundant in the DCX-enriched population from a GBM1 specimen.
DISCUSSION
Studies depicting the mechanism of glioma formation have been hampered by the fact that GBM is a dynamic disease. In this study, our primary goal was to identify the molecular characteristics of MRI-classified group 1 GBM (GBM1) through a proteomic approach. Ultimately, these findings should offer better prediction or potential treatments for GBM1. We found that the tumor-associated protease Annexin A2, which is critical in tumor invasion, is highly expressed in GBM1. This finding supports the notion of recurrent GBM1 tumors that migrate great distances from the initial lesions. The elevated level of Annexin A2 could potentially predict whether tumors are going to be more invasive. Additionally, two HSPs, HSPA8 and HSP-beta1, were found to be elevated in GBM1 in our screening. Given that HSPA8 is induced by many different stress signals to promote cell survival in adverse pathological conditions, such as cancers (20), anticancer therapy targeting HSPA8 in GBM1 may be an option as well. While HSP-beta1 is known as an estrogen-induced HSP involved in stress resistance (20, 41), its connection with GBM remains unknown. We intend to explore its roles in GBM in future studies. Furthermore, we found that a number of metabolic enzymes and ribosomal proteins are aberrantly accumulated in GBM1. Our results imply that amplification of proteins involved in metabolism and ribogenesis could contribute, at least in part, to facilitating tumor growth. Consequently, this metabolic reprogramming may allow cancer cells to survive environmental fluctuations, such as deficiency of nutrients. Therefore, therapies focused on controlling the abnormal metabolic circuitry and ribosomal biogenesis may be an option for the treatment of GBM1.
Previously, MRI-classified GBM localizations have provided the majority of evidence demonstrating the intimate association between GBM1 and the SVZ (11). Although our current results do not directly address whether SVZ cells give rise to GBM1, markers representing neural stem cell traits were found to be abundant in GBM1 specimens. Our finding highlights that GBM1 contains undifferentiated NSCs and neuroblasts potentially derived from the SVZ. Intriguingly, c-Myc was found to be abundant in the neuroblast-positive population in a GBM1 specimen, suggesting that the subset of SVZ cells with high levels of c-Myc may be prone to transformation in GBM1. Future experiments using an in vivo mouse model to fine-tune c-Myc levels in the SVZ will address this speculation.
In conclusion, the findings that emerged from our study provide cellular components for specific classification and better prediction of this multifocal GBM, as well as revealing potential pathways and metabolites involved in GBM1 that we will focus on in future studies.
MICROWAVE AND MAGNETIC (M2) SAMPLE PREPARATION
Protein lysate was extracted from cells using RIPA lysis buffer; after centrifugation at 14,000 × g for 15 min at 4°C, the supernatant was collected and stored at −80°C until further use. Protein concentration was determined using the EZQ Protein Quantitation Kit (Invitrogen, Grand Island, NY). C8 magnetic beads (BcMg, Bioclone Inc.) were used in this study. Briefly, 50 mg of beads were suspended in 1 mL of 50% methanol. Immediately before use, 100 µL of the beads were washed three times with equilibration buffer [200 mM NaCl, 0.1% trifluoroacetic acid (TFA)]. Protein lysate (25-100 µg at 1 µg/µL) was mixed with pre-equilibrated beads and one-third sample binding buffer (800 mM NaCl, 0.4% TFA) by volume. The mixture was incubated at room temperature for 5 min before the supernatant was removed. The beads were washed twice with 150 µL of 40 mM triethylammonium bicarbonate (TEAB), and then 150 µL of 10 mM dithiothreitol (DTT) was added, followed by microwave heating for 10 s. The DTT solution was then removed, 150 µL of 50 mM iodoacetamide (IAA) was added, and the sample was microwave heated for 10 s. Next, the beads were washed twice with 150 µL of 40 mM TEAB and resuspended in 150 µL of 40 mM TEAB. In vitro proteolysis was performed with 4 µL of trypsin at a 1:25 trypsin-to-protein ratio (stock = 1 µg/µL in 50 mM acetic acid), with microwave heating for 20 s performed in triplicate. The supernatant was transferred to a new tube for immediate use or stored at −80°C. In this work, the tryptic peptides released from digested protein lysates were analyzed by capillary liquid chromatography-Fourier-transform-tandem mass spectrometry (LC-FT-MS/MS) with protein database searching, without isobaric labeling.
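To make the enzyme arithmetic of the digestion step explicit, the short Python sketch below is our illustration only, not part of the published protocol; the helper function name is hypothetical, and the quantities are taken from the text. It computes the trypsin stock volume implied by the stated 1:25 trypsin-to-protein ratio:

    def trypsin_volume_ul(protein_ug: float, ratio: float = 1.0 / 25.0,
                          stock_ug_per_ul: float = 1.0) -> float:
        """Volume (uL) of trypsin stock needed for a given protein amount.

        At a 1:25 trypsin-to-protein ratio with a 1 ug/uL stock, 100 ug of
        protein lysate requires 4 ug of trypsin, i.e. 4 uL of stock, which
        matches the 4 uL volume quoted in the protocol.
        """
        trypsin_ug = protein_ug * ratio  # enzyme mass implied by the ratio
        return trypsin_ug / stock_ug_per_ul  # convert enzyme mass to stock volume

    print(trypsin_volume_ul(100.0))  # -> 4.0

For the 25-100 µg range of lysate quoted above, the same function gives 1-4 µL of stock, consistent with the fixed 4 µL used at the upper end of that range.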
CAPILLARY LIQUID CHROMATOGRAPHY-FOURIER-TRANSFORM-TANDEM MASS SPECTROMETRY WITH PROTEIN DATABASE SEARCHING
Capillary LC-FT-MS/MS was performed with a splitless nanoLC-2D pump (Eksigent, Livermore, CA, USA), a 50-µm-i.d. column packed with 7 cm of 3-µm C18 particles, and a hybrid linear ion trap-Fourier-transform tandem mass spectrometer (LTQ-ELITE; ThermoFisher, San Jose, CA, USA) operated with a lock mass for calibration. The reverse-phase gradient was 2-62% of 0.1% formic acid (FA) in acetonitrile over 60 min at 350 nL/min. For unbiased analyses, the six most abundant eluting ions were fragmented by data-dependent HCD with a mass resolution of 120,000 for MS and 15,000 for MS/MS. MS/MS spectra were subjected to probability-based protein database searching against the TrEMBL protein database (December 2012 release; 111,137 human protein sequences) with a 10-node MASCOT cluster (ver. 2.3.02, Matrix Science, London, UK) using the following search criteria: peak picking with Mascot Distiller; 10 ppm precursor ion mass tolerance; 0.8 Da product ion mass tolerance; three missed cleavages; trypsin; carbamidomethyl cysteine as a static modification; oxidized methionine and deamidated asparagine as variable modifications; and an ion score threshold of 20. The MASCOT score for a peptide is amino acid sequence-specific; according to Matrix Science, the reported score is −10·log10(P), where P is the probability that the observed match is a random event. For example, if 1500 candidate peptides fell within the mass tolerance window around a precursor mass and the significance threshold was chosen to be 0.05, this would translate into a score cut-off of −10·log10(0.05/1500) ≈ 45.
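For readers unfamiliar with Mascot scoring, the following minimal sketch reproduces the threshold arithmetic just described; it assumes the standard identity-threshold form, score > −10·log10(alpha/N), with the 1500-peptide, 0.05-significance example from the text.

```python
import math

# Illustrative calculation (not part of the original analysis pipeline) of the
# Mascot identity-score threshold. With N candidate peptides in the precursor
# tolerance window and significance level alpha, a match is significant when
# score > -10 * log10(alpha / N).

def mascot_score_threshold(n_candidates: int, alpha: float = 0.05) -> float:
    return -10.0 * math.log10(alpha / n_candidates)

print(round(mascot_score_threshold(1500), 1))  # -> 44.8, i.e., ~45 as in the text
```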
ALTERNATIVE PROTEOME ANALYSIS WITH ARG-C DIGESTION AND AN ORBITRAP ELITE MASS SPECTROMETER FOR INDEPENDENT SPECIMENS
Five microliters of sample was added to an equal volume of 100 mM ammonium bicarbonate, and 200 ng of endoproteinase Arg-C was added. Proteolytic digestion was carried out overnight in a 37°C water bath. Approximately 1 µg of digested material was directly injected (no trap) onto a Thermo Scientific nanoEasy LC coupled to a Thermo Scientific Orbitrap Elite mass spectrometer. Peptide separations were performed on a reversed-phase column (75 µm × 250 mm) packed with Magic C18 AQ (5 µm, 100 Å resin; Michrom Bioresources, Auburn, CA, USA) mounted directly on the electrospray ion source. A 60-min gradient from 2 to 40% acetonitrile in 0.1% FA at a flow rate of 300 nL/min was used for chromatographic separations. A spray voltage of 2,250 V was applied to the electrospray tip, and the Orbitrap Elite instrument was operated in data-dependent mode, switching automatically between MS survey scans in the Orbitrap (AGC target value 1,000,000, resolution 120,000, and injection time 250 ms) and MS/MS spectra acquisition in the linear ion trap (AGC target value 10,000 and injection time 100 ms), with HCD detected in the Orbitrap (AGC target value 50,000, resolution 15,000, and injection time 250 ms) and ETD detected in the Orbitrap (AGC target value 50,000, resolution 15,000, and injection time 250 ms). The three most intense ions from the Fourier-transform (FT) full scan were selected for fragmentation in the linear ion trap by collision-induced dissociation with a normalized collision energy of 35%, fragmentation in the HCD cell with a normalized collision energy of 35%, and ETD with a 100-ms activation time. Selected ions were dynamically excluded for 30 s. Data analysis was performed using Proteome Discoverer 1.3 (Thermo Scientific, San Jose, CA, USA). The data were searched against the IPI Human version 3.87 (International Protein Index) database. ArgC was set as the enzyme, with the maximum number of missed cleavages set to two. The precursor ion tolerance was set to 10 ppm and the fragment ion tolerance to 0.8 Da. All search results were run through Peptide Validator for scoring.
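For reference, the Arg-C search settings above can be collected into a single parameter summary; the sketch below uses illustrative key names, not Proteome Discoverer's actual configuration interface.

```python
# Summary of the Arg-C database-search settings described above, gathered into
# a plain dictionary. Key names are illustrative labels only.
argc_search_params = {
    "database": "IPI Human v3.87",
    "enzyme": "ArgC",
    "max_missed_cleavages": 2,
    "precursor_tolerance_ppm": 10,
    "fragment_tolerance_da": 0.8,
    "validation": "Peptide Validator",
}
```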
A total of six GBM specimens and five controls obtained from UCSF and CHTN/NCI were analyzed by proteomics. Because GBM1 specimens were located in the temporal lobe close to the SVZ, we used three normal controls from the temporal region near the SVZ (n = 3). Since GBM1 is an infiltrating tumor, we also used normal control regions from the temporal lobe near the hippocampus (n = 1) and from the frontal lobe (n = 1) for proteomic screening. The experimental procedures involving human specimens were approved by the Institutional Review Board (IRB) before the research began.
WESTERN BLOT ANALYSIS OF HUMAN BRAIN TISSUE SAMPLES
Using a Glass Tenbroeck Tissue Grinder, cross sections of snap-frozen brain tissue samples were homogenized in 1 mL of Buffer A with 1× protease inhibitor to extract cytoplasmic proteins. The resulting pellets were further homogenized in 1 mL of RIPA buffer containing 1× protease inhibitor to isolate the nuclear fraction. The total protein concentrations in the cytoplasmic and nuclear fractions were quantified by Bradford assay (Bio-Rad). For western blotting, equal amounts of protein from normal or GBM specimens were denatured in 1× SDS stop buffer (final concentration) and subjected to SDS-PAGE for western blot analysis with antibodies against c-Myc (Ab5, Thermo Scientific #MS1054; 1:1000) and γ-tubulin (Sigma #T5326; 1:1000). Subsequently, HRP-conjugated secondary IgG (Cell Signaling; 1:5000) and an enhanced chemiluminescence kit (ECL Plus; GE) were used for detection.
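As a convenience, the dilution arithmetic behind the antibody concentrations quoted above can be scripted; the helper below is illustrative, and the 10-mL working volume is an assumption rather than a value from the paper.

```python
# Illustrative helper (not from the paper) for preparing the antibody
# dilutions quoted above (1:1000 primaries, 1:5000 secondary).

def stock_volume_ul(final_volume_ml: float, dilution: int) -> float:
    """Volume of antibody stock (uL) for a 1:dilution in final_volume_ml of buffer."""
    return final_volume_ml * 1000.0 / dilution

# Assumed 10 mL working volume (hypothetical, for illustration only)
print(stock_volume_ul(10.0, 1000))  # -> 10.0 uL of c-Myc or gamma-tubulin antibody
print(stock_volume_ul(10.0, 5000))  # -> 2.0 uL of HRP-conjugated secondary IgG
```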
ARA-C ANTI-MITOTIC TREATMENT
Anti-mitotic solution (2% Ara-C in 0.9% saline) or control solution (0.9% saline) was infused at the pial surface of the brain via an infusion cannula attached to a miniosmotic pump (Alzet; flow rate 0.5 µL/h). Pumps were installed following stereotaxic coordinates (anterior: 0, lateral: 1.1 relative to bregma, and 0 at the pial surface). After 7 days of anti-mitotic treatment, mice were euthanized at day 0 post-Ara-C by transcardial perfusion with phosphate-buffered saline (PBS) and 4% paraformaldehyde (PFA). Brains were then post-fixed overnight in 4% PFA and sunk in 30% sucrose prior to cryosectioning at 12 µm for immunostaining and imaging. All mouse experiments were conducted in accordance with the guidelines of, and approved by, the Institutional Animal Care and Use Committees of the University of Texas at San Antonio, the Fred Hutchinson Cancer Research Center (FHCRC), and the University of California at San Francisco.
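As a back-of-the-envelope check (not stated in the paper), the pump parameters above imply the total infused volume and Ara-C dose computed in the sketch below.

```python
# Total volume and Ara-C dose delivered by the miniosmotic pump, assuming the
# stated flow rate of 0.5 uL/h held constant over the 7-day infusion and
# 2% (w/v) Ara-C, i.e., 20 mg/mL.
flow_ul_per_h = 0.5
hours = 7 * 24
total_ul = flow_ul_per_h * hours       # 84 uL delivered over 7 days
ara_c_mg = total_ul * 1e-3 * 20.0      # 84 uL x 20 mg/mL -> ~1.68 mg Ara-C
print(total_ul, ara_c_mg)
```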
"year": 2013,
"sha1": "ce914809e83c699ebed478a6025d67b85d8313f2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2013.00182/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce914809e83c699ebed478a6025d67b85d8313f2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.